Technology News | World

Exclusive: Google Contract Shows Deal With Israel Defense Ministry

Hundreds of protestors gather outside Google's offices in San Francisco for Palestine

Google provides cloud computing services to the Israeli Ministry of Defense, and the tech giant has negotiated deepening its partnership during Israel’s war in Gaza, a company document viewed by TIME shows.

The Israeli Ministry of Defense, according to the document, has its own “landing zone” into Google Cloud—a secure entry point to Google-provided computing infrastructure, which would allow the ministry to store and process data, and access AI services.


The ministry sought consulting assistance from Google to expand its Google Cloud access, aiming to allow “multiple units” to access automation technologies, according to a draft contract dated March 27, 2024. The contract shows Google billing the Israeli Ministry of Defense over $1 million for the consulting service.

The version of the contract viewed by TIME was not signed by Google or the Ministry of Defense. But a March 27 comment on the document, by a Google employee requesting an executable copy of the contract, said the signatures would be “completed offline as it’s an Israel/Nimbus deal.” Google also gave the ministry a 15% discount on the original price of consulting fees as a result of the “Nimbus framework,” the document says.

Project Nimbus is a controversial $1.2 billion cloud computing and AI agreement between the Israeli government and two tech companies: Google and Amazon. Reports in the Israeli press have previously indicated that under Project Nimbus, Google and Amazon are contractually barred from preventing specific arms of the Israeli state from using their technology. But this is the first time a contract showing that the Israeli Ministry of Defense is a Google Cloud customer has been made public.

Google recently described its work for the Israeli government as largely for civilian purposes. “We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education,” a Google spokesperson told TIME for a story published on April 8. “Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.”

Contacted on April 10 with questions about the Ministry of Defense contract, a Google spokesperson declined to comment further.

Read More: Exclusive: Google Workers Revolt Over $1.2 Billion Contract With Israel

The news comes after recent reports in the Israeli media alleged that the country’s military, which is controlled by the Ministry of Defense, is using an AI-powered system to select targets for air-strikes on Gaza. Such an AI system would likely require cloud computing infrastructure to function. The Google contract seen by TIME does not specify which military applications, if any, the Ministry of Defense uses Google Cloud for, and there is no evidence Google Cloud technology is being used for targeting purposes. But Google employees who spoke with TIME said the company has little ability to monitor what customers, especially sovereign nations like Israel, are doing on its cloud infrastructure.

The Israeli Ministry of Defense did not respond to requests for comment.


The Israeli Ministry of Defense’s attempt to onboard more units to Google Cloud is described in the contract as “phase 2” of a wider project to build out the ministry’s cloud architecture.

The document does not explicitly describe phase one, but does refer to earlier work carried out by Google on behalf of the ministry. The ministry, the contract says, “has [already] established a Google Cloud Landing Zone infrastructure as part of their overall cloud strategy and to enable [the Ministry of Defense] to move applications to Google Cloud Platform.”

For “phase 2” of the project, the contract says, the Ministry of Defense “is looking to enable its Landing Zone to serve multiple units and sub-units. Therefore, [the Ministry of Defense] would like to create several different automation modules within their Landing Zone based on Google’s leading practices for the benefit of different units, with proper processes to support, and to implement leading practices for security and governance architecture using Google tools.”

Under the consulting agreement, Google would “assist with architecture design, implementation guidance, and automation” for the Ministry of Defense’s Google Cloud landing zone, the contract says. The estimated start date is April 14, and Google’s consulting services are expected to take one calendar year to complete.
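The contract's language is abstract, but the general shape of a cloud "landing zone" that serves multiple units through reusable automation modules can be pictured in code. The Python below is a minimal, purely illustrative model; the unit names, module names, and roles are assumptions made for explanation and are not taken from the document TIME viewed or from Google's actual tooling.

```python
# Purely illustrative sketch (assumed names, not from the contract): a landing
# zone as one shared entry point that onboards multiple units, each provisioned
# by reusable "automation modules" for networking, security, and governance.
from dataclasses import dataclass, field


@dataclass
class AutomationModule:
    """A reusable provisioning recipe applied to every onboarded unit."""
    name: str
    baseline_roles: list[str] = field(default_factory=list)


@dataclass
class UnitEnvironment:
    """An isolated environment carved out of the landing zone for one unit."""
    unit: str
    modules: list[AutomationModule] = field(default_factory=list)


@dataclass
class LandingZone:
    """The shared, secured entry point into the cloud provider."""
    organization: str
    units: list[UnitEnvironment] = field(default_factory=list)

    def onboard(self, unit: str, modules: list[AutomationModule]) -> UnitEnvironment:
        env = UnitEnvironment(unit=unit, modules=list(modules))
        self.units.append(env)
        return env


if __name__ == "__main__":
    baseline = [
        AutomationModule("networking", ["roles/compute.networkUser"]),
        AutomationModule("governance", ["roles/logging.viewer"]),
    ]
    zone = LandingZone(organization="example-org")
    for unit in ("unit-a", "unit-b"):  # hypothetical units
        zone.onboard(unit, baseline)
    print(f"{len(zone.units)} units onboarded with {len(baseline)} modules each")
```

In practice such modules would be infrastructure-as-code templates rather than Python objects, but the structure, one secured perimeter with repeatable per-unit provisioning, is what the contract's "multiple units and sub-units" language points to.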

Two Google workers have resigned in the last month in protest against Project Nimbus, TIME previously reported.

Source: Tech – TIME | 12 Apr 2024 | 11:44 pm

In the Face of U.S. Ban Threats, TikTok’s Parent Company Is More Profitable Than Ever


While TikTok faces an uncertain future in the U.S., its Chinese parent company continues to rake in cash. On Wednesday, Bloomberg reported that ByteDance’s profit jumped 60% in 2023 to more than $40 billion, up from $25 billion in 2022, citing people familiar with the matter. Despite a slowing Chinese economy, ByteDance has been buoyed by TikTok’s massive global popularity, especially in the U.S.—TikTok has 170 million American users, and a Pew study from January found it was the fastest-growing social media platform in the country.


This marks the first time that ByteDance has outpaced its rival Tencent in revenue and profit. Last fall, ByteDance unveiled TikTok Shop in the U.S., through which entrepreneurs can sell products directly from the app. In China, ByteDance’s app Douyin has rolled out e-commerce and food delivery features. The company is also building its own chatbots and its own large language model in the hopes of competing with OpenAI.

But as TikTok has grown in influence and usership in the U.S., backlash has also ballooned. In March, the U.S. House of Representatives passed a bill giving ByteDance two choices: sell the app, or face a ban in the U.S. The bill has received wide bipartisan support: Senate Majority Leader Chuck Schumer, a New York Democrat, indicated that working on the bill was a priority, and President Biden said he would sign the bill if it landed on his desk.

Read More: What to Know About the Bill That Could Get TikTok Banned in the U.S.

But Republican leaders have been especially vocal in their opposition to TikTok recently, arguing that the app is a haven for Chinese propaganda. Former Vice President Mike Pence has spent millions via his policy organization on a campaign to pressure the Senate to pass the bill, and called its passage a “vitally important national security measure.” Mitch McConnell called TikTok “one of Beijing’s favorite tools of coercion and espionage.” ByteDance admitted in 2022 that former employees “misused their authority” to surveil American journalists on TikTok.

But the bill has slowed in the Senate, and faces criticism from free speech advocates and TikTok users. Defenders of the app have bombarded Congressional offices with calls urging lawmakers not to ban it. And many entrepreneurs who have come to rely on the app for their businesses have also begun speaking out.

“Cutting through the connective tissue of the app will sever important ways that Americans—especially young Americans—are speaking at a time when those conversations are as rich as ever,” wrote Scott Nover in a TIME ideas essay. 

In the event that the bill passes the Senate, ByteDance would face several major obstacles to actually selling its prized app. The Chinese government has signaled that it will not allow a forced sale of TikTok. And given that the app is likely worth tens of billions of dollars, that price tag is only feasible for a handful of American tech giants like Google or Meta—which brings antitrust concerns into play.

It is unclear which presidential candidate would be tougher on TikTok following the election. Both have sent mixed messages: Donald Trump reversed his calls for a TikTok ban, saying in March that banning the app would only increase the size of Facebook, which he called an “enemy of the people.” Biden’s National Security Council called the anti-TikTok bill “an important and welcome step.” But his campaign also recently joined TikTok in an attempt to drum up younger voter enthusiasm. 

Bloomberg wrote that ByteDance’s internal figures had not been independently audited. A representative for ByteDance declined to comment on the figures: “We don’t comment on market speculations,” they wrote.

Source: Tech – TIME | 11 Apr 2024 | 4:40 am

The Huge Risks From AI In an Election Year

President Biden Speaks On Lowering Costs

On the eve of New Hampshire’s presidential primary, a flood of robocalls exhorted Democratic voters to sit out a write-in campaign supporting President Joe Biden. An AI-generated voice on the line mimicked Biden’s uncanny cadence and his signature catchphrase (“malarkey!”). From that call, to fake creations envisioning a cascade of calamities under Biden’s watch, to AI deepfakes of a Slovakian candidate for national leadership appearing to discuss vote rigging and raising beer prices, AI is making its mark on elections worldwide. Against this backdrop, governments and several tech companies are taking some steps to mitigate risks—European lawmakers just approved a watershed law, and as recently as February tech companies signed a pledge at the Munich Security Conference. But much more needs to be done to protect American democracy.


In Munich, companies including OpenAI, Apple, Meta, Microsoft, TikTok, Google, and X announced a compact to undertake measures to protect elections as America and other countries go to the polls in 2024. The companies pledged to help audiences track the origin of AI-generated and authentic content, to try to detect deceptive AI media in elections, and to deploy “reasonable precautions” to curb risks from AI-fueled election trickery. The compact is not unwelcome, but its success will depend on how its commitments are executed. Those commitments were couched in slippery language—“proportionate responses,” “where appropriate and technically feasible,” “attempting to,” and so on—that gives companies latitude to do very little if they so choose. While some are taking further steps, the urgency of the situation demands stronger and more universal action.

This year brings the first national American election since AI advances made it possible for fraudsters to produce phony but close-to-perfect pictures, video, or audio of candidates and officials doing or saying almost anything, with minimal time, at little to no cost, and on a widespread basis. And the technology underlying generative AI chatbots lets hoaxers spoof election websites or spawn pretend news sites in a matter of seconds and on a mammoth scale. It also gives bad actors—foreign and domestic—the ability to conduct supercharged, hard-to-track interactive influence campaigns.

As generative AI is integrated into common search engines and voters converse with chatbots, people seeking basic information about elections have at times been met with misinformation, pure bunkum, or links to fringe websites. A recent study by AI Democracy Projects and Proof News indicated that popular AI tools—including Google’s Gemini, OpenAI’s GPT-4, and Meta’s Llama 2—“performed poorly on accuracy” when fed certain election questions. More traditional, non-generative AI could fuel mass challenges to thousands of voters’ eligibility, risking wrongful purges from voter rolls and burdening election offices. As election officials consider using new AI tools in their day-to-day tasks, the lack of meaningful regulation risks putting some voting access in harm’s way even as AI unlocks time-saving opportunities for short-staffed offices.

Before the Munich conference, the world’s premier generative AI operation, OpenAI, had announced a slate of company policies designed to curtail election harms as America votes in 2024. These include forbidding users from building custom AI chatbots that impersonate candidates, barring users from deploying OpenAI tools to spread falsehoods about when and how to vote, and embedding images with digital codes that will help observers figure out whether OpenAI’s Dall-E made an image that is multiplying in the wild.

But these actions—while more robust than those of some other major AI companies to date—fall short in important ways and underscore the limitations of what we may see in future months as OpenAI and other tech giants make gestures towards honoring the commitments made in Munich. First: the company’s public-facing policies do not call out several core false narratives and depictions that have haunted prior election cycles and are likely to be resurrected in new guises this year. For example, they do not expressly name fakeries that supposedly show election officials interfering with the vote count, fabrications of unreliable or impaired voting machines, or baseless claims that widespread voter fraud has occurred. (According to Brennan Center tracking, these rank among the most common false narratives promoted by election deniers in the 2022 midterm elections.) While OpenAI policies addressing misleading others and intentional deception arguably cover some, or all, such content, specifically naming these categories as barred from creation and spread would give more clarity to users and protection to voters. Since election procedures—and the occasional fast-resolved Election Day glitch—vary from county to county, the company should have conduits for sharing information between local election officials and OpenAI staff in the months leading up to the election.

Perhaps most importantly, the tech wunderkind needs to do more to curb the risks of working with third-party developers—that is, the companies that license OpenAI’s technology and integrate its models into their own services and products. For instance, if a user enters basic election questions into a third-party search engine that incorporates OpenAI models, the answers can be rife with errors, outdated facts, and other blunders. (WIRED reporting on Microsoft Copilot last December revealed several issues at that time with the search engine—which uses OpenAI technology—though some of those problems may have since been addressed.) To better protect voters this election season, the company must create and enforce stricter requirements for its partnerships with developers that integrate OpenAI models into search engines—or any digital platform where voters may go to seek information on the ABCs of voting and elections. While the accord signed in Munich was targeted at intentionally deceptive content, voters can also be harmed by AI “hallucinations”—fabrications spit out by systems’ algorithms—or other hiccups that AI creators and service providers have failed to prevent.

But OpenAI can’t go it alone. Other tech titans must release their own election policies for generative AI tools, unmasking any behind-the-scenes tinkering and opening internal practices to public scrutiny and knowledge. As a first step, more major tech companies—including Apple, ByteDance, Samsung, and X—should sign on to the Coalition for Content Provenance and Authenticity’s open standard for embedding content with digital markers that help prove whether it is AI-generated or authentic as it travels through the online ether. (While Apple and X committed in Munich to consider attaching similar signals to their content, a consistent standard would improve detection and coordination across the board.) The markers, meanwhile, should be made more difficult to remove. And social media companies must act swiftly to address deepfakes and bot accounts that mimic human activity, as well as institute meaningful account verification processes for election officials and bona fide news organizations to help voters find accurate information in feeds flush with misinformation, impersonators, and accounts masquerading as real news.
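To see what such provenance markers are meant to accomplish, the sketch below shows the underlying idea in a simplified form: content is paired with a signed manifest recording who generated it and a hash of the bytes, so a platform can later check whether the file was altered after it was marked. This is a rough stand-in, not the C2PA specification; real content credentials use certificate-based public-key signatures rather than the shared secret assumed here, and they embed the manifest in the file's metadata.

```python
# Simplified illustration of a content-provenance marker (not the C2PA spec):
# sign a manifest containing the content hash, then verify it later.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-held-by-the-generator"  # assumption: stand-in for real PKI


def make_record(media_bytes: bytes, generator: str) -> dict:
    """Create a manifest describing the content, plus a signature over it."""
    manifest = {"generator": generator, "sha256": hashlib.sha256(media_bytes).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}


def verify(media_bytes: bytes, record: dict) -> bool:
    """Check that the manifest is untampered and still matches the content."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    manifest_ok = hmac.compare_digest(expected, record["signature"])
    content_ok = hashlib.sha256(media_bytes).hexdigest() == record["manifest"]["sha256"]
    return manifest_ok and content_ok


if __name__ == "__main__":
    image = b"\x89PNG...synthetic bytes standing in for an AI-generated image..."
    record = make_record(image, generator="example-image-model")
    print(verify(image, record))              # True: marker intact
    print(verify(image + b"edited", record))  # False: content changed after marking
```

The practical weakness the essay alludes to is visible even in this toy version: strip or rewrite the record, and the content simply arrives unmarked, which is why the markers also need to be hard to remove.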

One company’s—or a handful of companies’—steps to address election harms, while important, are not enough in a landscape awash with unsecured, open-source AI models, where systems’ underlying code and the mathematical instructions that make them tick are publicly available, downloadable, and manipulable. Companies as prominent as Meta and Stability AI have released unsecured AI systems, and other players have rapidly churned out many more. Anyone who wishes to interfere in elections today has a suite of AI technology to choose from to bolster their efforts, and multiple ways to deploy it. That means that governments at all levels must also take urgent action to protect voters.

Congress, agencies, and states have a plethora of options at hand to blunt AI’s risks ahead of the 2024 election. Congress and states should regulate deepfakes—particularly those spread by campaigns, political action committees, and paid influencers—by requiring disclaimers on deceptive and digitally manipulated images, video, and audio clips that could suppress votes, or that misrepresent candidates’ and election officials’ words and actions. Lawmakers should also require campaigns and political action committees to clearly label a subset of content produced by the technology underlying generative AI chatbots, particularly where politicos deploy the technology to engage in continuous conversations with voters or to deceptively impersonate humans to influence elections. Policymakers should protect voters against frivolous challenges to their voting eligibility by setting constraints on the evidence that may be used to substantiate a challenge—including evidence unearthed or created through AI.

Federal agencies should act quickly to publish guidance for certifying the authenticity and provenance of government content—as envisioned by the executive order on AI issued by the administration last year—and state and local election officials should apply similar practices to their official content. Longer term, federal and state governments should also create guidance and benchmarks that help election officials evaluate AI systems before purchasing them, checking for reliability, accuracy, biases, and transparency.

These steps are all pivotal to dealing with challenges that arise specifically from AI technology. But the election risks that AI amplifies—disinformation, vote suppression, election security hazards, and so on—long predate the generative-AI boom. To fully protect voting and elections, lawmakers must also pass reforms like the Freedom to Vote Act—a set of wide-ranging provisions that stalled in the U.S. Senate. Full protection also means updating political ad disclosure requirements for the 21st century, and maintaining an expansive vision for democracy that is impervious to age-old and persistent efforts to subvert it.

Source: Tech – TIME | 10 Apr 2024 | 11:00 pm

Exclusive: Google Workers Revolt Over $1.2 Billion Contract With Israel

Hundreds of protestors gather outside Google's offices in San Francisco for Palestine

In midtown Manhattan on March 4, Google’s managing director for Israel, Barak Regev, was addressing a conference promoting the Israeli tech industry when a member of the audience stood up in protest. “I am a Google Cloud software engineer, and I refuse to build technology that powers genocide, apartheid, or surveillance,” shouted the protester, wearing an orange t-shirt emblazoned with a white Google logo. “No tech for apartheid!” 


The Google worker, a 23-year-old software engineer named Eddie Hatfield, was booed by the audience and quickly bundled out of the room, a video of the event shows. After a pause, Regev addressed the act of protest. “One of the privileges of working in a company which represents democratic values is giving space for different opinions,” he told the crowd.

Three days later, Google fired Hatfield.

Hatfield is part of a growing movement inside Google that is calling on the company to drop Project Nimbus, a $1.2 billion contract with Israel, jointly held with Amazon. The protest group, called No Tech for Apartheid, now has more than 200 Google employees closely involved in organizing, according to members, who say there are hundreds more workers sympathetic to their goals. TIME spoke to five current and five former Google workers for this story, many of whom described a growing sense of anger at the possibility of Google aiding Israel in its war in Gaza. Two of the former Google workers said they had resigned from Google in the last month in protest against Project Nimbus. These resignations, and Hatfield’s identity, have not previously been reported.

No Tech for Apartheid’s protest is as much about what the public doesn’t know about Project Nimbus as what it does. The contract is for Google and Amazon to provide AI and cloud computing services to the Israeli government and military, according to the Israeli finance ministry, which announced the deal in 2021. Nimbus reportedly involves Google establishing a secure instance of Google Cloud on Israeli soil, which would allow the Israeli government to perform large-scale data analysis, AI training, database hosting, and other forms of powerful computing using Google’s technology, with little oversight by the company. Google documents, first reported by the Intercept in 2022, suggest that the Google services on offer to Israel via its Cloud have capabilities such as AI-enabled facial detection, automated image categorization, and object tracking.

Further details of the contract are scarce or non-existent, and much of the workers’ frustration lies in what they say is Google’s lack of transparency about what else Project Nimbus entails and the full nature of the company’s relationship with Israel. Neither Google, nor Amazon, nor Israel, has described the specific capabilities on offer to Israel under the contract. In a statement, a Google spokesperson said: “We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education. Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.” All Google Cloud customers, the spokesperson said, must abide by the company’s terms of service and acceptable use policy. That policy forbids the use of Google services to violate the legal rights of others, or engage in “violence that can cause death, serious harm, or injury.” An Amazon spokesperson said the company “is focused on making the benefits of our world-leading cloud technology available to all our customers, wherever they are located,” adding it is supporting employees affected by the war and working with humanitarian agencies. The Israeli government did not immediately respond to requests for comment.

There is no evidence Google or Amazon’s technology has been used in killings of civilians. The Google workers say they base their protests on three main sources of concern: the Israeli finance ministry’s explicit statement in 2021 that Nimbus would be used by the Ministry of Defense; the nature of the services likely available to the Israeli government within Google’s cloud; and the apparent inability of Google to monitor what Israel might be doing with its technology. Workers worry that Google’s powerful AI and cloud computing tools could be used for surveillance, military targeting, or other forms of weaponization. Under the terms of the contract, Google and Amazon reportedly cannot prevent particular arms of the government, including the Israeli military, from using their services, and cannot cancel the contract due to public pressure.

Recent reports in the Israeli press indicate that air-strikes are being carried out with the support of an AI targeting system; it is not known which cloud provider, if any, provides the computing infrastructure likely required for such a system to run. Google workers note that for security reasons, tech companies often have very limited insight, if any, into what occurs on the sovereign cloud servers of their government clients. “We don’t have a lot of oversight into what cloud customers are doing, for understandable privacy reasons,” says Jackie Kay, a research engineer at Google’s DeepMind AI lab. “But then what assurance do we have that customers aren’t abusing this technology for military purposes?”

With new revelations continuing to trickle out about AI’s role in Israel’s bombing campaign in Gaza; the recent killings of foreign aid workers by the Israeli military; and even President Biden now urging Israel to begin an immediate ceasefire, No Tech for Apartheid’s members say their campaign is growing in strength. A previous bout of worker organizing inside Google successfully pressured the company to drop a separate Pentagon contract in 2018. Now, in a wider climate of growing international indignation at the collateral damage of Israel’s war in Gaza, many workers see Google’s firing of Hatfield as an attempt at silencing a growing threat to its business. “I think Google fired me because they saw how much traction this movement within Google is gaining,” says Hatfield, who agreed to speak on the record for the first time for this article. “I think they wanted to cause a kind of chilling effect by firing me, to make an example out of me.”


Hatfield says that his act of protest was the culmination of an internal effort, during which he questioned Google leaders about Project Nimbus but felt he was getting nowhere. “I was told by my manager that I can’t let these concerns affect my work,” he tells TIME. “Which is kind of ironic, because I see it as part of my work. I’m trying to ensure that the users of my work are safe. How can I work on what I’m being told to do, if I don’t think it’s safe?”

Three days after he disrupted the conference, Hatfield was called into a meeting with his Google manager and an HR representative, he says. He was told he had damaged the company’s public image and would be terminated with immediate effect. “This employee disrupted a coworker who was giving a presentation – interfering with an official company-sponsored event,” the Google spokesperson said in a statement to TIME. “This behavior is not okay, regardless of the issue, and the employee was terminated for violating our policies.”

Seeing Google fire Hatfield only confirmed to Vidana Abdel Khalek that she should resign from the company. On March 25, she pressed send on an email to company leaders, including CEO Sundar Pichai, announcing her decision to quit in protest over Project Nimbus. “No one came to Google to work on offensive military technology,” the former trust and safety policy employee wrote in the email, seen by TIME, which noted that over 13,000 children had been killed by Israeli attacks on Gaza since the beginning of the war; that Israel had fired upon Palestinians attempting to reach humanitarian aid shipments; and that it had fired upon convoys of evacuating refugees. “Through Nimbus, your organization provides cloud AI technology to this government and is thereby contributing to these horrors,” the email said.

Workers argue that Google’s relationship with Israel runs afoul of the company’s “AI principles,” which state that the company will not pursue applications of AI that are likely to cause “overall harm,” contribute to “weapons or other technologies” whose purpose is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.” “If you are providing cloud AI technology to a government which you know is committing a genocide, and which you know is misusing this technology to harm innocent civilians, then you’re far from being neutral,” Khalek says. “If anything, you are now complicit.”


Two workers for Google DeepMind, the company’s AI division, expressed fears that the lab’s ability to prevent its AI tools from being used for military purposes had been eroded following a restructure last year. When it was acquired by Google in 2014, DeepMind reportedly signed an agreement that said its technology would never be used for military or surveillance purposes. But a series of governance changes ended with DeepMind being bound by the same AI principles that apply to Google at large. Those principles haven’t prevented Google from signing lucrative military contracts with the Pentagon and Israel. “While DeepMind may have been unhappy to work on military AI or defense contracts in the past, I do think this isn’t really our decision any more,” said one DeepMind employee who asked not to be named because they were not authorized to speak publicly. “Google DeepMind produces frontier AI models that are deployed via [Google Cloud’s Vertex AI platform] that can then be sold to public-sector and other clients.” One of those clients is Israel.

“For me to feel comfortable with contributing to an AI model that is released on [Google] Cloud, I would want there to be some accountability where usage can be revoked if, for example, it is being used for surveillance or military purposes that contravene international norms,” says Kay, the DeepMind employee. “Those principles apply to applications that DeepMind develops, but it’s ambiguous if they apply to Google’s Cloud customers.”

A Google spokesperson did not address specific questions about DeepMind for this story.

Other Google workers point to what they know about Google Cloud as a source of concern about Project Nimbus. The cloud technology that the company ordinarily offers to its clients includes a tool called AutoML that allows a user to rapidly train a machine learning model using a custom dataset. Three workers interviewed by TIME said that the Israeli government could theoretically use AutoML to build a surveillance or targeting tool. There is no evidence that Israel has used Google Cloud to build such a tool, although the New York Times recently reported that Israeli soldiers were using the freely available facial recognition feature on Google Photos, along with other non-Google technologies, to identify suspects at checkpoints. “Providing powerful technology to an institution that has demonstrated the desire to abuse and weaponize AI for all parts of war is an unethical decision,” says Gabriel Schubiner, a former researcher at Google. “It’s a betrayal of all the engineers that are putting work into Google Cloud.”

A Google spokesperson did not address a question asking whether AutoML was provided to Israel under Project Nimbus.
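For readers unfamiliar with what "rapidly train a machine learning model using a custom dataset" looks like in practice, the sketch below is a generic stand-in rather than Google's AutoML API: a user supplies labeled examples and gets back a fitted classifier with essentially no model-design work. It uses scikit-learn and a synthetic dataset assumed purely for illustration.

```python
# Generic stand-in for AutoML-style training (scikit-learn, synthetic data):
# supply labeled examples, get back a fitted classifier and an accuracy score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 16))           # 500 "custom" examples, 16 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic binary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The workers' concern is not about toy classifiers like this one but about the workflow itself: hand over a labeled dataset, get back a working model, a process that scales to whatever data and computing power a cloud customer brings to it.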

Members of No Tech for Apartheid argue it would be naive to imagine Israel is not using Google’s hardware and software for violent purposes. “If we have no oversight into how this technology is used,” says Rachel Westrick, a Google software engineer, “then the Israeli military will use it for violent means.”

“Construction of massive local cloud infrastructure within Israel’s borders, [the Israeli government] said, is basically to keep information within Israel under their strict security,” says Mohammad Khatami, a Google software engineer. “But essentially we know that means we’re giving them free rein to use our technology for whatever they want, and beyond any guidelines that we set.”

Current and former Google workers also say they are afraid to speak up internally against Project Nimbus or in support of Palestinians, with some describing a fear of retaliation. “I know hundreds of people that are opposing what’s happening, but there’s this fear of losing their jobs, [or] being retaliated against,” says Khalek, the worker who resigned in protest against Project Nimbus. “People are scared.” Google’s firing of Hatfield, Khalek says, was “direct, clear retaliation… it was a message from Google that we shouldn’t be talking about this.”

The Google spokesperson denied that the company’s firing of Hatfield was an act of retaliation.

Regardless, internal dissent is growing, workers say. “What Eddie did, I think Google wants us to think it was some lone act, which is absolutely not true,” says Westrick, the Google software engineer. “The things that Eddie expressed are shared very widely in the company. People are sick of their labor being used for apartheid.”

“We’re not going to stop,” says Zelda Montes, a YouTube software engineer, of No Tech for Apartheid. “I can say definitively that this is not something that is just going to die down. It’s only going to grow stronger.”

Correction, April 10

The original version of this story misstated the number of Google staff actively involved in No Tech for Apartheid. It is more than 200, not 40.

Source: Tech – TIME | 9 Apr 2024 | 12:00 am

UK porn watchers could have faces scanned

New draft guidance sets out how porn websites and apps should stop children viewing their content.

Source: BBC News - Technology | 6 Dec 2023 | 12:04 am

GTA 6: Trailer for new game revealed after online leak

Rockstar Games releases the trailer 15 hours earlier than expected after it is leaked online.

Source: BBC News - Technology | 5 Dec 2023 | 11:57 pm

Ex-Tesla employee casts doubt on car safety

A whistleblower believes the self-driving vehicle technology is not safe enough for public roads.

Source: BBC News - Technology | 5 Dec 2023 | 9:01 pm

Booking.com users angry at firm's response to hacks

Customers say they have been failed and feel let down after losing hundreds of pounds to fraudsters.

Source: BBC News - Technology | 5 Dec 2023 | 2:21 am

Amazon, Valentino file joint lawsuit over shoes counterfeiting

Italian luxury brand Valentino and Internet giant Amazon have filed a joint lawsuit against New York-based Kaitlyn Pan Group for allegedly counterfeiting Valentino's shoes and offering them for sale online.

Source: Reuters: Technology News | 19 Jun 2020 | 2:48 am

DC superheroes coming to your headphones as Spotify signs podcast deal

Podcasts featuring Batman, Wonder Woman and Superman will soon stream on Spotify as the Swedish music streaming company has signed a deal with AT&T Inc's Warner Bros and DC Entertainment.

Source: Reuters: Technology News | 19 Jun 2020 | 2:44 am

UK ditches COVID-19 app model to use Google-Apple system

Britain on Thursday said it would switch to Apple and Google technology for its test-and-trace app, ditching its current system in a U-turn for the troubled programme.

Source: Reuters: Technology News | 19 Jun 2020 | 2:42 am

Russia lifts ban on Telegram messaging app after failing to block it

Russia on Thursday lifted its ban on the Telegram messaging app, a measure that had failed to stop the widely used programme from operating despite being in force for more than two years.

Source: Reuters: Technology News | 19 Jun 2020 | 1:54 am

Galaxy S9's new rival? OnePlus 6 will be as blazingly fast but with 256GB storage

OnePlus 6 throws down the gauntlet to Samsung's Galaxy S9, with up to 8GB of RAM and 256GB storage.

Source: Latest articles for ZDNet | 5 Apr 2018 | 12:25 am

Mozilla launches new effort to lure users back to the Firefox browser

With a revamped browser and a nonprofit mission focused on privacy and user empowerment, Mozilla is ready to strike while the iron's hot.

Source: Latest articles for ZDNet | 4 Apr 2018 | 11:00 pm

Intel: We now won't ever patch Spectre variant 2 flaw in these chips

A handful of CPU families that Intel was due to patch will now forever remain vulnerable.

Source: Latest articles for ZDNet | 4 Apr 2018 | 10:49 pm

Cloud computing: Don't forget these factors when you make the move

Neglecting some basic issues could leave your cloud computing project struggling.

Source: Latest articles for ZDNet | 4 Apr 2018 | 10:34 pm

GDS loses government data policy to DCMS

Source: ComputerWeekly.com | 30 Mar 2018 | 5:58 pm

Europol operation nabs another 20 cyber criminals

Source: ComputerWeekly.com | 30 Mar 2018 | 6:15 am

Business unaware of scale of cyber threat

Source: ComputerWeekly.com | 30 Mar 2018 | 1:45 am

UK government secures public sector discounts on Microsoft cloud products to April 2021

Source: ComputerWeekly.com | 29 Mar 2018 | 11:58 pm

Fitbit warns over tough competition, after selling fewer devices in 2017



Source: Technology | 27 Feb 2018 | 11:38 am

Samsung Galaxy S9 and S9+: The best deals and where to buy



Source: Technology | 27 Feb 2018 | 7:09 am

Google Glass set for comeback, hardware boss hints



Source: Technology | 27 Feb 2018 | 5:47 am

Amazon plans fix for Echo speakers that expose children to explicit songs



Source: Technology | 27 Feb 2018 | 4:50 am

Driverless 'Roborace' car makes street track debut

It is a car kitted out with technology its developers boldly predict will transform our cities and change the way we live.

Source: CNN.com - Technology | 19 Nov 2016 | 9:21 am

How to outsmart fake news in your Facebook feed

Fake news is actually really easy to spot -- if you know how. Consider this your New Media Literacy Guide.

Source: CNN.com - Technology | 19 Nov 2016 | 9:21 am

Flying a sports car with wings

Piloting one of the new breed of light aircraft is said to be as easy as driving a car.

Source: CNN.com - Technology | 19 Nov 2016 | 9:17 am

Revealed: Winners of the 'Oscars of watches'

It's the prize-giving ceremony that everyone's on time for.

Source: CNN.com - Technology | 19 Nov 2016 | 9:17 am










