Technology News | Time

Trump Requests $844,600 for Fundraiser Seat at Bitcoin Conference

The 2024 Republican National Convention

Donald Trump is inviting supporters in the cryptocurrency industry to a private fundraising event in Nashville on July 27, with an asking price of $844,600 for a seat at a round table.

Donors have also been offered an opportunity to snap a photo with the presidential candidate for $60,000 per person — slightly less than the current price of one Bitcoin — or $100,000 per couple, according to an invitation to the event. The fundraiser will be held during the Bitcoin Conference 2024, an annual event organized by BTC Media LLC for fans of the original cryptocurrency. Trump is set to speak on the main stage of the conference the same day.

The asking price of $844,600 for the round-table seat represents the maximum combined campaign contribution to the Trump campaign and the Republican National Committee that’s allowed under campaign finance laws. 

Special guests at the reception will include Trump’s vice presidential pick JD Vance, a senator from Ohio, as well as the former president’s Republican primary opponent Vivek Ramaswamy, Tennessee Senator Bill Hagerty, and former Hawaii Representative Tulsi Gabbard, according to an email describing the event obtained by Bloomberg from an invitee, who asked not to be identified since the event is private. Attendance will be limited to 100 to 150 donors, who “will enjoy drinks and hors d’oeuvres while mingling with influential guests,” according to the email. Following the reception, guests will get front-row seats to watch Trump deliver a speech on Bitcoin, the message added.

The Trump campaign did not immediately reply to a request for comment, nor did the people listed as special guests at the event. 

The Nashville fundraiser is the latest sign of Trump’s about-face on crypto. He has expressed support for Bitcoin after meeting with crypto-mining executives at his Mar-a-Lago club last month. Trump told attendees at that event that he loves and understands cryptocurrency and the benefits that Bitcoin miners bring to power grids. That is a departure from his stance on the asset class five years ago when, as president, he said he was not a fan of cryptocurrencies because their values are based on “thin air” and they can facilitate drug trafficking and other crimes.

Source: Tech – TIME | 19 Jul 2024 | 8:29 am

How the Crypto World Learned to Love Donald Trump, J.D. Vance, and Project 2025

Trump Vance

When the pandemic hit in 2020, the DJ and personal trainer Jonnie King stopped getting booked for gigs and workout sessions. So he turned to trading crypto, which was rapidly increasing in value at the time. “I was like, ‘Oh my god, there’s hope for me. I can make money while stuck at home,’” he says. 

Four years later, King is a devout believer who keeps most of his assets in cryptocurrencies. And although he voted for Bernie Sanders in 2016—due to Sanders’ focus on uplifting the working class—King is now a vocal supporter of Donald Trump, due to Trump’s own recent embrace of crypto.

“I can probably say it’s a single vote issue for me, because that’s my livelihood,” King tells TIME. “Crypto is how I save my wealth, and if [the Democrats] are trying to attack that, that’s literally taking my money away from me. How am I supposed to support my family?” 

King exemplifies a growing faction within the cryptocurrency community that is embracing Trump with open arms. For years, both during his presidency and after, Trump expressed distrust of crypto. In 2021 he went as far as to say that Bitcoin seemed like a scam. But leading up to the 2024 election, Trump has done an about-face and lavished praise onto the technology. And in just the last week, he took several more significant steps to win over the crypto faithful: he announced an appearance at a Bitcoin conference in Nashville on July 27 and a new NFT project, and chose a staunchly pro-crypto vice-presidential candidate in J.D. Vance.

The crypto world has returned the enthusiasm. Despite any misgivings they may have about other parts of Trump’s platform or his criminal convictions, many believe he will provide a significant boon for the industry should he be elected. The crypto community on X, formerly known as Twitter, is filled with pro-Trump sentiment, and crypto money is pouring into Trump’s campaign. And in the aftermath of the assassination attempt against Trump, Bitcoin shot up in price, seemingly based on the belief that the event helped Trump’s chances of being elected.

“Trump has had an incredible and surprisingly positive impact on this space,” Kristin Smith, the CEO of the crypto lobbying group The Blockchain Association, tells TIME. “That was not on my 2024 bingo card.” 

Trump’s crypto U-turn

Trump hasn’t gone into much detail about his newfound love for crypto after criticizing it for so many years. But he has used the industry as a wedge issue, directly contrasting himself with leftist crypto skeptics like Elizabeth Warren. And because the crypto lobby is well-organized and flush with money, it offers Trump a whole lot of potential cash.

Trump has attended several fundraisers full of cryptocurrency executives, who promised to throw him more fundraisers, according to The Washington Post. Crypto moguls Tyler and Cameron Winklevoss each donated $1 million in Bitcoin to Trump, criticizing Biden’s “war against crypto,” and Trump discussed crypto policy with pro-crypto entrepreneur Elon Musk, according to Bloomberg. (Musk has since endorsed Trump.) The price tag of attending a “VIP reception” with Trump at the upcoming Bitcoin conference is a cool $844,600 per person.

When Trump announced his campaign would accept cryptocurrency donations, a statement on his website read that the decision was part of a larger fight against “socialistic government control” over the U.S. financial markets. (Joe Biden hasn’t said much publicly about crypto, but his administration has supported stricter policies designed to protect consumers.)

Read More: Why Donald Trump Is Betting on Crypto

And earlier this month, The Post reported that a Trump advisor added language about crypto to the Republican Party platform, which surprised longtime party members. Part of the passage read: “We will defend the right to mine Bitcoin, and ensure every American has the right to self-custody of their digital assets and transact free from government surveillance and control.” (Government agencies currently use blockchain tracing to track crypto scammers and other criminals.)  

Read More: Inside the Health Crisis of a Texas Bitcoin Town

J.D. Vance, Trump’s VP pick, bolsters the ticket’s crypto bona fides

On Monday, Trump further energized crypto fans by choosing the pro-crypto Senator J.D. Vance as his running mate. While running for Senate in 2021, Vance disclosed that he owned over $100,000 worth of Bitcoin. The same year, he called the crypto community “one of the few sectors of our economy where conservatives and other free thinkers can operate without pressure from the social justice mob.” Vance also received significant campaign funding from pro-crypto entrepreneur Peter Thiel. 

Earlier this year, Vance circulated draft legislation to overhaul crypto regulation and make clearer whether specific crypto tokens should be regulated by the SEC or the CFTC. Politico reported that the proposal seems to be “more industry-friendly” than previously introduced bills.

The crypto industry has largely cheered the prospect of a personal Bitcoin holder entering the White House next year. “Senator Vance—an emerging voice for fit-for-purpose, pro-innovation crypto legislation—is an ideal candidate to lead the Republican Party’s crypto principles,” Kristin Smith wrote to TIME in an email.

Project 2025 also supports the crypto industry

Looming over the election is Project 2025, a far-reaching conservative blueprint led by the Heritage Foundation which spells out the policies that Trump should enact if he is elected, including launching mass deportations and countering “anti-white” discrimination. While Trump distanced himself from the proposal on Truth Social, dozens of Trump allies and former administration officials are connected to the project. 

The crypto industry is excited by crypto-related language in Project 2025. The document calls on the president to abolish the Federal Reserve (whose monetary policies have long been abhorred by crypto advocates) and move the U.S. to a free banking system, in which the dollar is backed by a valuable commodity like gold—or, crypto enthusiasts hope, Bitcoin itself. However, there’s been no indication that Trump or anyone in his administration has considered the idea. The document also calls on regulators to clarify rules around cryptocurrencies, just like Vance is pushing for, which could open the door for greater crypto adoption. 

Read More: What is Project 2025? 

Questions about Trump’s commitment to Bitcoin linger

Despite all this, some crypto fans are skeptical that Trump’s sudden embrace of Bitcoin will carry lasting weight beyond an election-year talking point. Some of Trump’s avowed policy proposals, which have been described as authoritarian, seem to run counter to Bitcoin’s anti-government, libertarian bent. For instance, his call for all Bitcoin mining to be located in the U.S. rubbed certain crypto idealists the wrong way, as decentralization and immunity to governmental pressure are central to the ethos of crypto mining.

Moe Vela, a former advisor to Biden and a senior advisor to the cryptocurrency project Unicoin, is skeptical of Trump’s intentions. “It was not long ago that he was bashing crypto,” he says. “The crypto community tends to be a bit inexperienced when it comes to legislation, policy and politics—and I encourage them to not fall prey to the pandering.”

Vela argues that “healthy and balanced” regulation of crypto is essential to the industry’s growth. “If we don’t have regulation that weeds out nefarious actors—and we’ve already seen we have our fair share of bad actors—that weakens trust and confidence in the sector,” he says.

And Vitalik Buterin, a co-founder of the cryptocurrency Ethereum, wrote a blog post on July 17 cautioning crypto enthusiasts not to cast votes simply based on a candidate’s crypto position. “Making decisions in this way carries a high risk of going against the values that brought you into the crypto space in the first place,” he wrote.

Some polls suggest that crypto is still an extremely niche interest. The Federal Reserve found that just 7% of American adults used or held crypto in 2023, and another poll suggested that anti-crypto sentiment remains high. But the crypto industry is convinced that there could be thousands of single-issue crypto voters, like Jonnie King, who will lift Trump in the coming election. 

“Maybe it’s just a politician being a politician to win votes,” King says of Trump’s pro-crypto stance. “I’m not saying any man is perfect. But when Biden is campaigning a war against crypto, the one system that is hope for money, I see that as no way going forward. 

“If Trump can give us some hope—even if it’s just hope—it’s something.” 

Source: Tech – TIME | 18 Jul 2024 | 5:29 am

Malaysia Looks to Criminalize Cyberbullying After TikTok User’s Death

The TikTok logo is seen on a mobile device screen

The death of a Malaysian TikTok user has prompted the government to look into criminalizing cyberbullying and increasing accountability among internet service providers.

Rajeswary Appahu was found dead of an apparent suicide on July 5, a day after the 30-year-old lodged a police report over online threats she had received, local media reported. That led to two people pleading guilty in court on Tuesday over communication offenses on TikTok, with one of them receiving a 100 ringgit ($21.40) fine as punishment.

Such investigations and prosecutions are difficult because there are no specific provisions for cyberbullying under Malaysian laws, according to Law Minister Azalina Othman Said. The government will consider proposals to define “cyberbullying” and make it a crime under the Penal Code, she added.

“Cyberbullying isn’t a new issue in Malaysia, and each year, we are shocked by news of individuals being bullied, which end with them taking their own lives,” she said in a statement Tuesday.

The government is also refining proposals to draft a bill that would increase internet service providers’ accountability on matters of security, she said. It would give enforcement officers new powers to work closely with internet service providers to protect online users, she added.

The Malaysian Communications and Multimedia Commission said separately it would work with the police to facilitate public complaints on cyberbullying. The commission also planned to hold a nationwide tour to spread its anti-bullying message, it said in a statement Saturday.

If you or someone you know may be experiencing a mental-health crisis or contemplating suicide, call or text 988. In emergencies, call 911, or seek care from a local hospital or mental health provider. For international resources, click here.

Source: Tech – TIME | 17 Jul 2024 | 6:15 pm

Musk to Move X, SpaceX Headquarters to Texas From California

Twitter Starts To Rebrand Its San Francisco Headquarters With Giant X Logo

Elon Musk said he will relocate the headquarters for X and SpaceX to Texas, a likely symbolic move that adds more fuel to the billionaire’s efforts to align himself with the political right and distance himself from left-leaning California. 

Musk made the announcements on his X social media site Tuesday, citing frustration over a new law in California related to transgender children in public schools. California became the first US state to ban school districts from requiring teachers to notify parents about changes to a student’s sexual orientation and gender identity.

“This is the final straw,” Musk said in the post announcing SpaceX’s relocation.

The move is the latest development in Musk’s shift toward the political right. In the past week, Musk offered a full-throated endorsement of former President Donald Trump in the upcoming US election and reportedly plans to donate tens of millions of dollars a month to Trump’s campaign. He has long criticized California’s liberal politics, and has threatened to pull X and his other businesses out of the state on numerous occasions.

In the July 16 post, Musk wrote: “Because of this law and the many others that preceded it, attacking both families and companies, SpaceX will now move its HQ from Hawthorne, California, to Starbase, Texas.”

SpaceX’s headquarters is currently in Hawthorne, California, but the company has been building out a large facility in South Texas dubbed Starbase over the last few years. The site in Boca Chica is the primary location where SpaceX builds and launches its massive Starship rocket system, and the company recently added a warehouse factory at Starbase known as the Starfactory, which replaced many of the site’s production tents.

X’s headquarters is currently in San Francisco, though the company put several floors of its main building up for lease last week. It was still expected to retain some of that space for employees. In January, X said it was planning to open a small office in Austin to help deal with content moderation problems.

SpaceX has roughly 13,000 employees. Its Hawthorne facility has been the primary location for production and processing of the company’s Falcon 9 workhorse rocket, as well as the larger, more powerful Falcon Heavy rocket.

Texas Governor Greg Abbott said in an X post that the move “cements Texas as the leader in space exploration.”

The new California law may be personal for Musk. One of his eldest children went to court the day after they turned 18 in 2022 and changed their name, citing “gender identity and the fact that I no longer live with or wish to be related to my biological father in any way, shape or form,” according to court filings.

Musk also has several ties to Texas already. His electric car company, Tesla Inc., earlier this year moved its business incorporation to Texas from Delaware, and similarly moved its headquarters from California to Austin in 2021 amid frustration with pandemic lockdowns. 

But Tesla still has a sizable presence in the Golden State, with an engineering headquarters in Palo Alto. 

Musk also moved his personal residence to Texas several years ago. 

Source: Tech – TIME | 17 Jul 2024 | 8:15 am

Hong Kong Testing ChatGPT-Style Tool After OpenAI Took Steps to Block Access

OpenAI and ChatGPT

HONG KONG — Hong Kong’s government is testing the city’s own ChatGPT-style tool for its employees, with plans to eventually make it available to the public, its innovation minister said after OpenAI took extra steps to block access from the city and other unsupported regions.

Secretary for Innovation, Technology and Industry Sun Dong said on a Saturday radio show that his bureau was trying out the artificial intelligence program, whose Chinese name translates to “document assistance application for civil servants,” to further improve its capabilities. He plans to have it available for the rest of the government this year.

The program was developed by a generative AI research and development center led by the Hong Kong University of Science and Technology in collaboration with several other universities.

Sun said the model would provide functions like graphics and video design in the future. To what degree it would compare to the capabilities of ChatGPT was unclear.

Sun’s bureau did not respond to The Associated Press’ questions about the model’s functions.

Sun said on the radio show that industry players and the government would play a role in the model’s future development.

“Given Hong Kong’s current situation, it’s difficult for Hong Kong to get giant companies like Microsoft and Google to subsidize such projects, so the government had to start doing it,” he said.

Beijing and Washington are locked in a race for AI supremacy, with China having ambitions to become the global leader in AI by 2030.

China, including Hong Kong and neighboring Macao, is not on the list of “supported countries and territories” of OpenAI, one of the best-known artificial intelligence companies.

The ChatGPT maker has not explained why certain territories were excluded but said accounts in those places attempting to access its services may become blocked.

According to a post on OpenAI’s online forum and local media reports, the company announced in an email to some users that it would be taking additional measures to block connections from regions not on the approved list starting July 9. It did not explain the reasons behind the latest move.

Francis Fong, the honorary president of the Hong Kong Information Technology Federation, said it was hard to say whether the capabilities of the program in Hong Kong could match those of ChatGPT. With input from AI companies in the city, Fong said he believed it could catch up to those standards technologically.

“Will it become the top? Maybe may not necessarily be as close as that. But I believe it won’t be too far behind,” he said.

He also said a locally developed AI program might more accurately address local language and localized issues, but added that it would “make sense” if the final product appears to be “politically correct.”

Like most foreign websites and applications, ChatGPT is technically unavailable in China because of the country’s firewall, which censors the internet for residents. Determined individuals can still gain access via commonly available “virtual private networks” that bypass restrictions.

Chinese tech giants such as Alibaba and Baidu have already rolled out primarily Chinese-language AI models similar to ChatGPT for public and commercial use. However, these AI models must abide by China’s censorship rules.

In May, China’s cyberspace academy said an AI chatbot was being trained on President Xi Jinping’s doctrine, a stark reminder of the ideological parameters within which Chinese AI models will operate.

Also in May, SenseTime, a major Chinese artificial intelligence company, launched SenseChat for users in Hong Kong, where most of the population speaks Cantonese as their mother tongue rather than Mandarin. But a check on Tuesday found the application could not provide answers to politically sensitive questions, such as what the Tiananmen crackdown in 1989 and Hong Kong’s protests in 2019 were about.

During the 1989 crackdown, Chinese troops opened fire on student-led pro-democracy protesters, resulting in hundreds, if not thousands, dead, and that remains a taboo subject in mainland China.

In 2019, protests that started over unpopular Hong Kong legislation morphed into an anti-government movement and the greatest political challenge to Beijing’s rule since the former British colony returned to China in 1997.

Source: Tech – TIME | 16 Jul 2024 | 10:42 pm

Here’s What AT&T Customers Impacted By the Major Data Security Breach Should Do Now

On Friday, AT&T announced that the data of nearly all of its more than 100 million customers was downloaded to a third-party platform in a security breach dating back to 2022. The affected parties include AT&T’s cellular customers, customers of mobile virtual network operators using AT&T’s wireless network, and other phone numbers that an AT&T wireless number interacted with during this time, including AT&T landline customers.

A company investigation determined that the compromised data includes files containing AT&T records of calls and texts between May 1, 2022 and Oct. 31, 2022, as well as on Jan. 2, 2023. But the company confirmed that the breach did not include the content of those calls or texts, nor their timestamps. It also doesn’t include details such as Social Security numbers, dates of birth, or other personally identifiable information.

The company has shared advice to customers on what the breach means for their data safety and how to protect themselves.

Luckily, AT&T does not believe that the data is publicly available, though it does not know exactly what is being done with it.

“We have confirmed that the affected third-party cloud-based workspace has been secured,” AT&T spokesperson Alex Byers told TIME in an emailed statement. “We sincerely regret this incident occurred and remain committed to protecting the information in our care.”

AT&T says it is contacting the customers whose data was compromised by the data breach. Customers can also check the status of their myAT&T, FirstNet, and business AT&T accounts to see if their data was affected through their account profile.

Until December 2024, those impacted by the data breach will be able to receive the phone numbers of the calls and texts compromised by the data breach. Current customers can request this data through their AT&T profile. Active AT&T wireless and home phone customers can get help here, while AT&T Prepaid customers can submit a data request.

Prior customers who were with AT&T during the affected time frame can access their breached data through a data request. If customers cannot provide their case number, they can still submit a legal demand subpoena to AT&T’s registered agent, CT Corp, for handling and processing, according to AT&T.

AT&T’s website also recommends customers protect themselves from phishing and scamming through multiple avenues, including only opening text messages from people that customers know, never replying to a text from an unknown sender with personal details, going directly to a company’s website, and looking for the “s” after the http in the address of a website to ensure its security. 
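
As a rough illustration of the “look for the ‘s’ after the http” tip, the short Python sketch below checks whether a link uses HTTPS before treating it as potentially safe. The example URLs are hypothetical, and HTTPS alone does not prove a site is legitimate; it is only one quick check among the others AT&T lists.

```python
from urllib.parse import urlparse

def uses_https(url: str) -> bool:
    # True only if the link's scheme is "https" -- the "s" the advice refers to.
    # Note: phishing sites can also use HTTPS, so treat this as one check of many.
    return urlparse(url).scheme.lower() == "https"

# Hypothetical examples
print(uses_https("https://www.att.com/"))            # True
print(uses_https("http://account-update.example"))   # False
```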

The telecommunications giant also recommended that customers forward suspicious text activity to AT&T—a free service that does not count toward any text plan—and report fraud to AT&T’s fraud team.

Source: Tech – TIME | 13 Jul 2024 | 5:17 am

What We Know About the New U.K. Government’s Approach to AI

Labour Party Conference 2023

When the U.K. hosted the world’s first AI Safety Summit last November, Rishi Sunak, the then Prime Minister, said the achievements at the event would “tip the balance in favor of humanity.” At the two-day event, held in the cradle of modern computing, Bletchley Park, AI labs committed to share their models with governments before public release, and 29 countries pledged to collaborate on mitigating risks from artificial intelligence. It was part of the Sunak-led Conservative government’s effort to position the U.K. as a leader in artificial intelligence governance, which also involved establishing the world’s first AI Safety Institute—a government body tasked with evaluating models for potentially dangerous capabilities. While the U.S. and other allied nations subsequently set up their own similar institutes, the U.K. institute boasts 10 times the funding of its American counterpart. 

Eight months later, on July 5, after a landslide loss to the Labour Party, Sunak left office and the newly elected Prime Minister Keir Starmer began forming his new government. His approach to AI has been described as potentially tougher than Sunak’s.  

Starmer appointed Peter Kyle as science and technology minister, giving the lawmaker oversight of the U.K.’s AI policy at a crucial moment, as governments around the world grapple with how to foster innovation and regulate the rapidly developing technology. Following the election result, Kyle told the BBC that “unlocking the benefits of artificial intelligence is personal,” saying the advanced medical scans now being developed could have helped detect his late mother’s lung cancer before it became fatal.

Alongside the potential benefits of AI, the Labour government will need to balance concerns from the public. An August poll of over 4,000 members of the British public conducted by the Centre for Data Ethics and Innovation found that 45% of respondents believed AI taking people’s jobs represented one of the biggest risks posed by the technology; 34% believed loss of human creativity and problem solving was one of the greatest risks.

Here’s what we know so far about Labour’s approach to artificial intelligence.

Regulating AI

One of the key issues for the Labour government to tackle will likely be how to regulate AI companies and AI-generated content. Under the previous Conservative-led administration, the Department for Science, Innovation and Technology (DSIT) held off on implementing rules, saying that “introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation and prevent people from across the UK from benefiting from AI,” in a 2024 policy paper about AI regulation. Labour has signaled a different approach, promising in its manifesto to introduce “binding regulation on the handful of companies developing the most powerful AI models,” suggesting a greater willingness to intervene in the rapidly evolving technology’s development.

Read More: U.S., U.K. Announce Partnership to Safety Test AI Models

Labour has also pledged to ban sexually explicit deepfakes. Unlike proposed legislation in the U.S., which would allow victims to sue those who create non-consensual deepfakes, Labour has considered a proposal by Labour Together, a think tank with close ties to the current Labour Party, to impose restrictions on developers by outlawing so-called nudification tools.

While AI developers have made agreements to share information with the AI Safety Institute on a voluntary basis, Kyle said in a February interview with the BBC that Labour would make that information-sharing agreement a “statutory code.”

Read More: To Stop AI Killing Us All, First Regulate Deepfakes, Says Researcher Connor Leahy

“We would compel, by law, those test data results to be released to the government,” Kyle said in the interview.

Timing regulation is a careful balancing act, says Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute.

“The art form is to be right on time with law. That means not too early, not too late,” she says. “The last thing that you want is a hastily thrown together policy that stifles innovation and does not protect human rights.”

Wachter says that striking the right balance on regulation will require the government to be in “constant conversation” with stakeholders, such as those within the tech industry, to ensure the government has an inside view of what is happening at the cutting edge of AI development when formulating policy.

Kirsty Innes, director of technology policy at Labour Together, points to the U.K. Online Safety Act, which was signed into law last October, as a cautionary tale of regulation failing to keep pace with technology. The law, which aims to protect children from harmful content online, took six years from the initial proposal to being signed into law.

“During [those 6 years] people’s experiences online transformed radically. It doesn’t make sense for that to be your main way of responding to changes in society brought by technology,” she says. “You’ve got to be much quicker about it now.”

Read More: The 3 Most Important AI Policy Milestones of 2023

There may be lessons for the U.K. to learn from the E.U. AI Act, Europe’s comprehensive regulatory framework passed in March, which will come into force on August 1 and become fully applicable to AI developers in 2026. Innes says that mimicking the E.U. is not Labour’s endgame. The European law outlines a tiered risk classification for AI use cases, banning systems deemed to pose unacceptable risks, such as social scoring systems, while placing obligations on providers of high-risk applications like those used for critical infrastructure. Systems said to pose limited or minimal risk face fewer requirements. Additionally, it sets out rules for “general-purpose AI”, which are systems with a wide range of uses, like those underpinning chatbots such as OpenAI’s ChatGPT. General-purpose systems trained on large amounts of computing power—such as GPT-4—are said to pose “systemic risk,” and developers will be required to perform risk assessments as well as track and report serious incidents.
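
To make the tiered structure described above easier to see at a glance, here is a minimal, non-authoritative Python sketch that maps each risk level to the treatment the paragraph describes. The tier labels and examples paraphrase the article’s summary, not the Act’s legal definitions.

```python
# Simplified summary of the EU AI Act's tiered approach as described above.
# Labels and examples are paraphrases for illustration only.
RISK_TIERS = {
    "unacceptable": {
        "example": "social scoring systems",
        "treatment": "banned outright",
    },
    "high": {
        "example": "AI used for critical infrastructure",
        "treatment": "obligations placed on providers",
    },
    "limited_or_minimal": {
        "example": "most other applications",
        "treatment": "fewer requirements",
    },
    "general_purpose_systemic": {
        "example": "large models such as GPT-4",
        "treatment": "risk assessments, plus tracking and reporting of serious incidents",
    },
}

for tier, info in RISK_TIERS.items():
    print(f"{tier}: {info['treatment']} (e.g. {info['example']})")
```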

“I think there is an opportunity for the U.K. to tread a nuanced middle ground somewhere between a very hands-off U.S. approach and a very regulatory heavy E.U. approach,” says Innes.

Read More: There’s an AI Lobbying Frenzy in Washington. Big Tech Is Dominating

In a bid to occupy that middle ground, Labour has pledged to create what it calls the Regulatory Innovation Office, a new government body that will aim to accelerate regulatory decisions.

“Part of the idea of the Regulatory Innovation Office is to help regulators develop the capacity that they need a bit quicker and to give them the kind of stimulus and the nudge to be more agile,” says Innes.

A ‘pro-innovation’ approach

In addition to helping the government respond more quickly to the fast-moving technology, Labour says the “pro-innovation” regulatory body will speed up approvals to help new technologies get licensed faster. The party said in its manifesto that it would implement AI into healthcare to “transform the speed and accuracy of diagnostic services, saving potentially thousands of lives.”

Healthcare is just one area where Kyle hopes to use AI. On July 8, he announced the revamp of the DSIT, which will bring on AI experts to explore ways to improve public services.

Meanwhile, former Labour Prime Minister Tony Blair has encouraged the new government to embrace AI to improve the country’s welfare system. A July 9 report by his think tank, the Tony Blair Institute for Global Change, concluded that AI could save the U.K. Department for Work and Pensions more than $1 billion annually.

Blair has emphasized AI’s importance. “Leave aside the geopolitics, and war, and America and China, and all the rest of it. This revolution is going to change everything about our society, our economy, the way we live, the way we interact with each other,” Blair said, speaking on the Dwarkesh Podcast in June.

Read more: How a New U.N. Advisory Group Wants to Transform AI Governance

Modernizing public services is part of Labour’s wider strategy to leverage AI to grow the U.K. tech sector. Other measures include making it easier to set up data centers in the U.K., creating a national data library to bring existing research programs together, and offering decade-long research and development funding cycles to support universities and start-ups.

Speaking to business and tech leaders in London last March, Kyle said he wanted to support “the next 10 DeepMinds to start up and scale up here within the U.K.” 

Workers’ rights

Artificial intelligence-powered tools can be used to monitor worker performance, such as grading call-center employees on how closely they stick to the script. Labour has committed to ensuring that new surveillance technologies won’t find their way into the workplace without consultation with workers. The party has also promised to “protect good jobs” but, beyond committing to engage with workers, has offered few details on how.

Read More: As Employers Embrace AI, Workers Fret—and Seek Input

“That might sound broad brush, but actually a big failure of the last government’s approach was that the voice of the workforce was excluded from discussions,” says Nicola Smith, head of rights at the Trades Union Congress, a union group.

While Starmer’s new government has a number of urgent matters to prioritize, from setting out its legislative plan for year one to dealing with overcrowded prisons, the way it handles AI could have far-reaching implications.

“I’m constantly saying to my own party, the Labour Party, [that] ‘you’ve got to focus on this technology revolution. It’s not an afterthought,’” Blair said on the Dwarkesh Podcast in June. “It’s the single biggest thing that’s happening in the world today.”

Source: Tech – TIME | 13 Jul 2024 | 5:06 am

Data of Nearly All AT&T Customers Downloaded to Third-Party Platform in 2022 Security Breach

AT&T Data Breach

The data of nearly all customers of the telecommunications giant AT&T was downloaded to a third-party platform in a 2022 security breach, the company said Friday, in a year already rife with massive cyberattacks.

The breach hit AT&T’s cellular customers, customers of mobile virtual network operators using AT&T’s wireless network, as well as landline customers who interacted with those cellular numbers.

A company investigation determined that compromised data includes files containing AT&T records of calls and texts between May 1, 2022 and Oct. 31, 2022.

AT&T has more than 100 million customers in the U.S. and almost 2.5 million business accounts.

The company said Friday that it has launched an investigation and engaged with cybersecurity experts to understand the nature and scope of the criminal activity.

“The data does not contain the content of calls or texts, personal information such as Social Security numbers, dates of birth, or other personally identifiable information,” AT&T said Friday.

The compromised data also doesn’t include some information typically seen in usage details, such as the time stamp of calls or texts, the company said. The data doesn’t include customer names, but AT&T said that there are often ways, using publicly available online tools, to find the name associated with a specific telephone number.

AT&T said that it currently doesn’t believe that the data is publicly available.

The compromised data also includes records from Jan. 2, 2023, for a very small number of customers. The records identify the telephone numbers an AT&T or MVNO cellular number interacted with during these periods. For a subset of records, one or more cell site identification number(s) associated with the interactions are also included.

The company said it continues to cooperate with law enforcement on the incident and that it understands at least one person has been apprehended so far.

The year has already been marked by several major data breaches, including an earlier attack on AT&T. In March AT&T said that a dataset found on the “dark web” contained information such as Social Security numbers for about 7.6 million current AT&T account holders and 65.4 million former account holders.

AT&T said at the time that it had already reset the passcodes of current users and would be communicating with account holders whose sensitive personal information was compromised.

There have also been major disruptions at car dealerships in North America after software provider CDK Global faced back-to-back cyberattacks. And Alabama’s education superintendent said earlier this month that some data was “breached” during a hacking attempt at the Alabama State Department of Education.

Shares of AT&T Inc., based in Dallas, fell more than 2% before the markets opened on Friday.

Source: Tech – TIME | 13 Jul 2024 | 1:19 am

European Union Says X’s Blue Checks Are Deceptive, Transparency Falls Short Under Social Media Law

Elon Musk

LONDON — The European Union says blue checkmarks from Elon Musk’s X are deceptive and that the online platform falls short on transparency and accountability requirements in the first charges against a tech company since the bloc’s new social media regulations took effect.

The European Commission outlined on Friday the preliminary findings from its investigation into X, formerly known as Twitter, under the 27-nation bloc’s Digital Services Act.

The rulebook, also known as the DSA, is a sweeping set of regulations that requires platforms to take more responsibility for protecting users and cleaning up their sites.

Regulators took aim at X’s blue checks, saying they constitute “dark patterns” that are not in line with industry best practice and can be used by malicious actors to deceive users.

After Musk bought the site in 2022, it started issuing the verification marks to anyone who paid $8 per month for one. Before Musk’s acquisition, they mirrored verification badges common on social media and were largely reserved for celebrities, politicians and other influential accounts.

Source: Tech – TIME | 12 Jul 2024 | 10:33 pm

Republicans’ Vow to Repeal Biden’s AI Executive Order Has Some Experts Worried

President Biden Delivers Remarks On Artificial Intelligence

On July 8, Republicans adopted a new party platform ahead of a possible second term for former President Donald Trump. Buried among the updated policy positions on abortion, immigration, and crime, the document contains a provision that has some artificial intelligence experts worried: it vows to scrap President Joe Biden’s executive order on AI.

“We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology,” the platform reads.

Biden’s executive order on AI, signed last October, sought to tackle threats the new technology could pose to civil rights, privacy, and national security, while promoting innovation, competition, and the use of AI for public services. It requires developers of the most powerful AI systems to share their safety test results with the U.S. government and calls on federal agencies to develop guidelines for the responsible use of AI in domains such as criminal justice and federal benefits programs.

Read More: Why Biden’s AI Executive Order Only Goes So Far

Carl Szabo, vice president of industry group NetChoice, which counts Google, Meta, and Amazon among its members, welcomes the possibility of the executive order’s repeal, saying, “It would be good for Americans and innovators.”

“Rather than enforcing existing rules that can be applied to AI tech, Biden’s Executive Order merely forces bureaucrats to create new, complex burdens on small businesses and innovators trying to enter the marketplace. Over-regulating like this risks derailing AI’s incredible potential for progress and ceding America’s technological edge to competitors like China,” said Szabo in a statement.

However, recent polling shared exclusively with TIME indicates that Americans on both sides of the political aisle are skeptical that the U.S. should avoid regulating AI in an effort to outcompete China. According to the poll conducted in late June by the AI Policy Institute (AIPI), 75% of Democrats and 75% of Republicans believe that “taking a careful controlled approach” to AI is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.”

Dan Hendrycks, director of the Center for AI Safety, says, “AI safety and risks to national security are bipartisan issues. Poll after poll shows Democrats and Republicans want AI safety legislation.”

Read more: U.S. Voters Value Safe AI Development Over Racing Against China, Poll Shows

The proposal to remove the guardrails put in place by Biden’s executive order runs counter to the public’s broad support for a measured approach to AI, and it has prompted concern among experts. Amba Kak, co-executive director of the AI Now Institute and former senior advisor on AI at the Federal Trade Commission, says Biden’s order was “one of the biggest achievements in the last decade in AI policy,” and that scrapping the order would “feel like going back to ground zero.” Kak says that Trump’s pledge to support AI development rooted in “human flourishing” is a subtle but pernicious departure from more established frameworks like human rights and civil liberties.

Ami Fields-Meyer, a former White House senior policy advisor on AI who worked on Biden’s executive order, says, “I think the Trump message on AI is, ‘You’re on your own,’” referring to how repealing the executive order would end provisions aimed at protecting people from bias or unfair decision-making from AI.

NetChoice and a number of think tanks and tech lobbyists have railed against the executive order since its introduction, arguing it could stifle innovation. In December, venture capitalist and prominent AI investor Ben Horowitz criticized efforts to regulate “math, FLOPs and R&D,” alluding to the compute thresholds set by Biden’s executive order. Horowitz said his firm would “support like-minded candidates and oppose candidates who aim to kill America’s advanced technological future.”

While Trump has previously accused tech companies like Google, Amazon, and Twitter of working against him, in June, speaking on Logan Paul’s podcast, Trump said that the “tech guys” in California gave him $12 million for his campaign. “They gave me a lot of money. They’ve never been into doing that,” Trump said.

The Trump campaign did not respond to a request for comment.

Even if Trump is re-elected and does repeal Biden’s executive order, some changes wouldn’t be felt right away. Most of the leading AI companies agreed to voluntarily share safety testing information with governments at an international summit on AI in Seoul last May, meaning that removing the requirements to share information under the executive order may not have an immediate effect on national security. But Fields-Meyer says, “If the Trump campaign believes that the rigorous national security safeguards proposed in the executive order are radical liberal ideas, that should be concerning to every American.”

Fields-Meyer says the back and forth over the executive order underscores the importance of passing federal legislation on AI, which “would bring a lot more stability to AI policy.” There are currently over 80 bills relating to AI in Congress, but it seems unlikely any of them will become law in the near future.

Sandra Wachter, a professor of technology regulation at the Oxford Internet Institute, says Biden’s executive order was “a seminal step towards ensuring ethical AI and is very much on par with global developments in the UK, the EU, Canada, South Korea, Japan, Singapore and the rest of the world.” She says she worries it will be repealed before it has had a chance to have a lasting impact. “It would be a very big loss and a big missed opportunity if the framework was to be scrapped and AI governance to be reduced to a partisan issue,” she says. “This is not a political problem, this is a human problem—and a global one at that.”

Correction, July 11

The original version of this story misidentified a group that has spoken out against Biden’s executive order. It is NetChoice, not TechNet.

Source: Tech – TIME | 11 Jul 2024 | 2:59 am








