Technology News | Time

Google to Automatically Delete Location Data When Users Visit Abortion Clinics

MOUNTAIN VIEW, Calif. — Google will automatically purge information about users who visit abortion clinics or other places that could trigger legal problems now that the U.S. Supreme Court has opened the door for states to ban the termination of pregnancies.

The company behind the dominant internet search engine and the Android software that powers most of the world’s smartphones outlined the new privacy protections in a Friday blog post.

Read more: Anti-Abortion Pregnancy Centers Are Collecting Troves of Data That Could Be Weaponized Against Women

Besides automatically deleting visits to abortion clinics, Google cited counseling centers, fertility centers, addiction treatment facilities, weight loss clinics, and cosmetic surgery clinics as other destinations that will be erased from users’ location histories. Users have always had the option to edit their location histories on their own, but Google will now proactively do it for them as an added layer of protection.

“We’re committed to delivering robust privacy protections for people who use our products, and we will continue to look for new ways to strengthen and improve these protections,” Jen Fitzpatrick, a Google senior vice president, wrote in the blog post.

The pledge comes amid escalating pressure on Google and other Big Tech companies to do more to shield the troves of sensitive personal information collected through their digital services and products from government authorities and other outsiders.

Read more: Lawmakers Scramble to Reform Digital Privacy After Roe Reversal

The calls for more stringent privacy controls were triggered by the U.S. Supreme Court’s recent decision overturning the 1973 Roe v. Wade ruling that legalized abortion. That reversal could make abortion illegal in more than a dozen states, raising the specter that records of people’s locations, texts, searches and emails could be used in prosecutions over abortion procedures, or even over medical care sought in a miscarriage.

Like other technology companies, Google each year receives thousands of government demands for users’ digital records as part of misconduct investigations. Google says it pushes back against search warrants and other demands that are overly broad or appear to be baseless.

Source: Tech – TIME | 3 Jul 2022 | 7:15 am

Facebook Asks Judge to ‘Crack the Whip’ in Attempt to Silence a Black Whistleblower

A lawyer representing Facebook’s parent company Meta called on a judge to “crack the whip” against a Black South African whistleblower on Monday, requesting a gagging order to prevent him from speaking to the media.

The whistleblower, Daniel Motaung, was paid $2.20 per hour to be a Facebook content moderator in Kenya. He was fired by Facebook’s outsourcing partner, Sama, in 2019 after he led more than 100 of his colleagues in a unionization effort for better pay and working conditions. He suffers from post-traumatic stress disorder as a result of his work, and is now suing both Meta and Sama in a Nairobi court, alleging that he and his former colleagues are victims of forced labor, human trafficking and union-busting.

Motaung’s experiences at Sama were first reported by TIME in February 2022. He has since spoken about his ordeal publicly, including at a panel discussion on June 14 in London alongside another Facebook whistleblower, Frances Haugen. At a hearing in a Kenyan labor court on June 27, Sama’s lawyer Terry Mwango said that Motaung speaking to the media and in public about his experiences risked prejudicing court proceedings. Mwango requested a formal order to prevent Motaung and his lawyers from speaking about the case in public.

Read More: Inside Facebook’s African Sweatshop

Meta’s lawyer, Fred Ojiambo, seconded Mwango’s request. “Unless the petitioner and particularly his advocates are injuncted by this court from continuing to deal with this matter in this way, there will be complete and total contempt, not only of the proceedings, but of the court and the judicial officer dealing with it,” Ojiambo said.

Addressing the judge, he added: “It’s my honorable submission, lord, that your lordship crack the whip, this time around.”

In court, Motaung’s lawyer Mercy Mutemi rejected the allegations that her client had breached Kenya’s sub judice rules, saying he and his representatives had refrained from discussing specifics of the case in public in order to comply with Kenyan law. She said Meta and Sama had not presented any evidence to show a gagging order was necessary.

The judge refused to immediately impose a gagging order, but invited Meta and Sama to bring contempt of court proceedings if they could find evidence in support.

Racial justice advocates condemned Facebook for the attempt to silence Motaung. “In a court of law, Facebook has confirmed in the most explicit way imaginable that they think Black people are property to be controlled rather than people to be respected,” said Rashad Robinson, president of the U.S.-based civil rights group Color of Change, in a statement to TIME. “Treating Black people as second-class digital citizens and high-exploitation employees is a pattern for Facebook, and their selective silencing of a Black whistleblower proves that only regulation will bring them in line with 21st century labor standards.”

Color of Change says it is calling on Meta to immediately drop its demand for a gagging order against Motaung.

“We need to make sure that Black employees suffering under Facebook’s ‘sweatshop’ labor conditions are free to blow the whistle without Facebook ‘whipping’ them into silence,” Robinson said. “We have fought too long to be silenced by any whip that Facebook selectively enforces against Black people, whether users on its platform, users of its ad services or employees of its subcontractors. We need regulation now.”

Although Facebook requires its employees to sign restrictive non-disclosure agreements, it is rare for the company to explicitly attempt to silence a whistleblower who has gone public. Haugen, the Facebook whistleblower who leaked thousands of pages of internal documents last year, and who is white, has said she has not faced similar attempts. “After I came out, I got the benefit of the race and gender issues,” she said during the panel discussion with Motaung last month. “I think it would have been very difficult for Facebook to come after me at this point because it would be a huge PR liability for them. We in our society have norms against, like, picking on women, for example. So I want to completely acknowledge my privilege.”

Read more: Inside Frances Haugen’s Decision to Take on Facebook

Meta did not respond to multiple requests for comment, and Ojiambo did not respond to a request for comment. Motaung, through his lawyers, declined to comment, as did Mutemi.

In an email, Sama’s chief marketing officer Suzin Wold said: “The judge in this case cautioned all the parties against commenting on the court case in any forum. Respect for [the] judge’s orders and that cases should be addressed by the court are important principles of Kenyan law that we intend to respect. Given that, we are unable to comment any further.”

In 2020, in the wake of widespread racial justice protests in the U.S., Facebook CEO Mark Zuckerberg wrote on Facebook that he believed that “Black lives matter.” He added: “I know Facebook needs to do more to support equality and safety for the Black community through our platforms.”

Neither Sama nor Facebook responded to questions asking whether they planned to proceed with formal legal requests for a gagging order against Motaung.

In June, Sama’s CEO Wendy Gonzalez appeared at a conference in Toronto where she was asked on stage about Motaung’s allegations. “We are supportive of feedback loops including everything from whistleblower [sic] in anonymous digital media all the way to physical media,” she said. “So ultimately at the end of the day, all concerns should be raised and they should be addressed very seriously.”

Source: Tech – TIME | 2 Jul 2022 | 4:57 am

Lawmakers Scramble to Reform Digital Privacy After Roe Reversal

Before Roe v. Wade granted women the constitutional right to abortion in 1973, most abortion procedures were kept hidden, even from close family members. Some women destroyed evidence and traveled in the wee hours of the morning to cover their tracks. But with today’s advances in technology, even though it’s never been easier or safer to access abortion at home, keeping it private could turn out to be much harder. The websites and apps that people use every day leave a digital footprint that’s nearly impossible to hide.

The Supreme Court’s reversal of Roe v. Wade on June 24 has directed a spotlight on the question of digital surveillance, as Google searches, location information, period-tracking apps and other personal digital data could be collected and used as evidence of a crime if one seeks to terminate a pregnancy—or helps someone do so—in states where it’s illegal. The prevalence of abortion pills, which allow people to end their pregnancies in their own homes, raises new privacy issues, as most patients must order the pills over the Internet or access a telemedicine appointment to have the medication prescribed.

While some lawmakers have been fighting on this issue for years, legislation that would enshrine safeguards against the collection of personal data by governments and companies for criminal surveillance and corporate profit has stalled. But the urgency has intensified in recent weeks.

“The answer can’t be just don’t use technology,” Rep. Sara Jacobs, a Democrat from California who introduced a digital privacy bill in June, tells TIME. “These are services that are very helpful to people. The answer is for us in government to do our job and put the protections in place.”

The day of the Supreme Court’s decision, Google search interest for “how to get an abortion” was more than six times higher than the previous day. Internet searches like these could turn up in criminal cases, and it’s hardly out of left field. In 2017, prosecutors used a Mississippi woman’s search history for pregnancy-terminating medication as evidence in a trial over the death of her fetus. And in 2015, prosecutors used text messages about abortion pills, exchanged between friends, to convict a woman of child neglect and feticide.

“The current privacy protections are fairly weak,” says Hayley Tsukayama, a senior legislative activist at the Electronic Frontier Foundation, which advocates for digital rights. There is no single, comprehensive federal law regulating how user data is collected, stored or shared, leaving the issue of digital privacy largely up to companies to decide, she says. Period-tracking apps, for example, which millions of people use to help track their menstrual cycles, could sell information to third parties.

“You can ask websites and apps to stop collecting your information, and you can even ask them to stop selling it,” Tsukayama says, but without a federal data privacy law in place, “You can’t really force them.”

Here’s a look at recent bills that have been introduced at the federal and state levels aimed at protecting digital privacy.

My Body, My Data Act

The My Body, My Data Act, introduced in the House on June 16 and later in the Senate, would task the Federal Trade Commission (FTC) with enforcing a national privacy standard for reproductive health data collected by apps, cell phones and search engines. It would require that companies collect and store only the health information that is strictly needed to provide their services. It would also give users the right to access or delete their personal data.

Rep. Jacobs, who introduced the bill, says digital privacy concerns are especially acute in states like Texas and Oklahoma, where citizens can collect rewards of up to $10,000 for reporting those who violate the states’ abortion laws. “It would make it so that a small right-wing nonprofit group in Texas couldn’t just buy up or get access to this data and create a mass surveillance system,” says Jacobs, “to be able to turn people in who are seeking abortion, as is incentivized in the Texas bounty law.”

Democratic Sens. Ron Wyden and Mazie Hirono, longtime proponents of digital privacy reform, introduced the bill in the upper chamber. The bills have been endorsed by Planned Parenthood, NARAL, National Abortion Federation, URGE, National Partnership for Women & Families, Feminist Majority and the Electronic Frontier Foundation.

Jacobs says there’s a “very good chance” that the Democrat-led House votes on the bill soon. “I think that people really recognize the urgency of this moment,” she says. But the bill faces steeper odds in the sharply divided Senate, privacy experts tell TIME.

Stop Anti-Abortion Disinformation Act

Another bill, called the Stop Anti-Abortion Disinformation (SAD) Act, was introduced on June 23 by a group of Democrats led by Rep. Carolyn Maloney of New York and Rep. Suzanne Bonamici of Oregon, as well as Sens. Bob Menendez of New Jersey and Elizabeth Warren of Massachusetts.

It aims to crack down on misleading advertising by anti-abortion pregnancy centers, known as crisis pregnancy centers, which often style themselves as reproductive health clinics without making it clear that they are faith-based organizations whose mission is to dissuade pregnant women from having abortions.

Read More: Anti-Abortion Pregnancy Centers Are Collecting Troves of Data That Could Be Weaponized Against Women

A recent TIME investigation found that these pregnancy centers also collect vast troves of personal data on the women who come to them for help. These women often do not understand that they are providing detailed health information—including addresses, marital status, demographic information, sexual and reproductive histories, test results, ultrasound photos, and information shared during consultations—to organizations run by the anti-abortion movement. Because most pregnancy centers, which outnumber abortion clinics three to one across the country, are not licensed medical clinics and offer services for free, privacy lawyers tell TIME that they are not legally bound by federal health data privacy laws.

“By promoting deceptive or misleading advertisements about abortion services, crisis pregnancy centers jeopardize women’s health and well-being,” Sen. Menendez said in a statement. The SAD Act directs the FTC to prohibit deceptive practices by these centers, and authorizes the agency to enforce these rules and collect penalties.

Some abortion providers already began taking steps to safeguard patient information prior to the Supreme Court ruling. Many are now using paper records, making phone calls instead of texting or e-mailing, and using encrypted messaging apps.

Health and Location Data Protection Act

Yet another bill, the Health and Location Data Protection Act, introduced by Sen. Elizabeth Warren, Democrat of Massachusetts, on June 15, would ban data brokers from selling or transferring a person’s medical and sensitive personal information, with a few limited exceptions. It would also give the FTC $1 billion over 10 years to enforce these rules. “Data brokers profit from the location data of millions of people, posing serious risks to Americans everywhere by selling their most private information,” Warren said in a statement the day the legislation was introduced.

Recent reporting from Vice found that for $160, one could buy a week’s worth of data on where people who visited more than 600 Planned Parenthood clinics came from and where they went afterward. Although that data was not tied to people’s names, privacy advocates argue that such details are discoverable if an individual’s travel patterns are unique. Acquiring and selling user data, experts say, is a billion-dollar industry that continues to grow.

“Data collection and processing is really at the heart of a lot of business models now,” EFF’s Tsukayama says. “It’s very difficult to convince people to change that unless there are some penalties or some other mechanisms to push that change.”

Read More: America’s High-Tech Surveillance Could Track Abortion-Seekers, Too, Activists Warn

State Legislation

In some states, local lawmakers have taken matters into their own hands.

Pennsylvania state Rep. Mary Jo Daley, a Democrat, introduced legislation on May 4 that would bar pregnancy centers in the state from sharing client data without permission. She noted a recent decision by the state’s Office of Open Records that said that pregnancy centers in the state were risking clients’ privacy rights by sending their data—including names and the services they received, as well as their pregnancy status, sexual history and STD information—to Real Alternatives, the state-funded network of anti-abortion pregnancy centers.

“My bill would regulate what [data] they collect and the authorizations that they would be required to have, providing information to the woman so they would know exactly what they are signing on to,” she told TIME. “On its own this is a dangerous invasion of privacy but considering recent movement to deputize private citizens into vigilantes to regulate reproductive health, the threat is becoming even more imminent.”

Right now, though, states can only do so much, says Alan Butler, executive director and president of the Electronic Privacy Information Center. Because the U.S. lacks a comprehensive set of federal digital privacy laws, women in states banning abortion are especially vulnerable.

“The states that are more likely to restrict abortion rights,” he says, “are also the states that don’t have strong privacy laws.”

Source: Tech – TIME | 2 Jul 2022 | 4:44 am

Fact-Checking 8 Claims About Crypto’s Climate Impact

Cryptocurrencies are bad for the environment—at least, that’s what most people online seem to believe. Pro-crypto posts on social media are often flooded with angry comments about the industry’s outsized contribution to greenhouse gas emissions. Studies estimate that Bitcoin mining, the process that safeguards the Bitcoin network, uses more power globally per year than most countries, including the Philippines and Venezuela.

On the other side, members of the crypto community argue that crypto mining is actually good for the environment in several crucial ways. They say that it offers a new, energy-hungry market that will encourage renewable projects. In the long run, they say, crypto will revolutionize the energy grid, and soak up excess energy that would have been otherwise wasted.

As lobbyists have volleyed arguments on both sides, a blow was dealt to crypto mining’s hopes for rapid expansion in the U.S. on June 30 when New York officials denied the air permits of Greenidge Generation, a Bitcoin mining operation, citing “substantial greenhouse gas (GHG) emissions associated with the project.” The decision could set a precedent for how local jurisdictions across the country approach a hotly contested topic.

So which side is correct?

To investigate, TIME spoke with several energy and environmental experts to break down some of the crypto community’s main arguments. While some experts say that there’s potential for positive impact from crypto mining, most agree there are few indications that the industry is going in the right direction.

“There is a narrow path upon which they could be useful to the energy system—but I don’t see that happening,” says Joshua Rhodes, an energy research associate at the University of Texas at Austin. And right now, he says, damage is already being done. “Writ large, they’re probably adding to carbon emissions currently.”

Claim: Crypto mining relies on renewable energy.

Bitcoin’s network relies on groups of computers, all around the world, to churn through cryptographic puzzles. These computing centers act less like “miners” in the literal sense and more like network watchdogs, used for security and stability. The process, known as proof of work, is energy-intensive by design, in order to prevent hacks and attacks.
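The puzzle at the heart of proof of work can be sketched in a few lines of Python. This is an illustrative toy, not Bitcoin’s actual protocol (which hashes binary block headers against a compactly encoded target), but the structure is the same: guess, hash, check, repeat.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 digest of (data + nonce) starts
    with `difficulty` zero hex characters -- a toy stand-in for Bitcoin's
    real target check."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1  # each failed guess is discarded work: the energy cost

# Each extra hex digit of difficulty multiplies the expected work by 16,
# which is why mining at scale consumes so much electricity.
winning_nonce = mine("example block", difficulty=4)
print(winning_nonce)
```

Because the only way to win is brute-force guessing, the network’s security scales directly with the amount of computation, and therefore electricity, that miners are willing to spend.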

Crypto advocates argue that the proof-of-work process is becoming more energy efficient: that more and more miners are turning to renewable energy sources like wind, solar, or hydropower, as opposed to coal or natural gas. However, one peer-reviewed study from earlier this year shows the opposite: that the Bitcoin network’s use of renewable energy dropped from an average of 42% in 2020 to 25% in August 2021. Researchers believe that China’s crackdown on crypto, where hydropower-driven mining operations used to be plentiful, was the primary catalyst of this decrease.

At the moment, the rate at which crypto miners use renewable energy sources is heavily disputed. The Bitcoin Mining Council, an industry group, argues that 60% of mining runs on renewable sources, 20 percentage points higher than the figure listed by the Cambridge Centre for Alternative Finance. George Kamiya, an energy analyst at the International Energy Agency, says that while the Bitcoin Mining Council likely has access to more data, its numbers come from a self-reported survey that lacks methodological details, and he has encouraged the group to share its underlying data and methodology with outside researchers like Cambridge’s to increase its credibility.

Regardless of which statistic is closer to the truth, there are still many mining operations using non-green energy sources. In New York, Greenidge repurposed a coal power plant that was previously shuttered. It’s now powered by natural gas, which is also fossil-fuel-based. Yvonne Taylor, vice-president of Seneca Lake Guardian, an environmental non-profit, told TIME in April that Greenidge would emit “over a million tons of CO2 equivalents into the atmosphere every year, in addition to harmful particulate matter.”

A representative for Greenidge wrote in an email to TIME that the company has offered to reduce its greenhouse gas emissions by 40% from its currently permitted levels by 2025, and that it plans to be a “zero-carbon emitting power generation facility” by 2035. The company also plans to appeal the denial of its air permits and remain operational.

Claim: Crypto mining will lead to a renewable energy boom.

If crypto mining isn’t sustaining itself on renewables right now, might it in the future? Fred Thiel, the CEO of the crypto mining company Marathon Digital Holdings, has announced his intention to make the company fully carbon-neutral by the end of this year, and says that companies like his could have a huge impact on the future of the renewable energy industry.

It’s worth noting that many cryptocurrencies already use much less energy-intensive processes than Bitcoin’s proof of work. Smaller blockchains like Solana and Avalanche use a security mechanism called proof of stake, which Ethereum Foundation researchers claim reduces energy usage by more than 99% compared to Bitcoin’s system. Ethereum, the second largest blockchain behind Bitcoin, is in the process of switching from proof of work to proof of stake this year.

It doesn’t seem like Bitcoin will transition away from proof of work any time soon. But renewable energy developers need customers in order to grow, and proof-of-work miners provide exactly that, Thiel argues. As an example, Thiel suggested that there are wind farms in Vermont that have no ability to sell their energy because of their remote locations and the lack of transmission lines. Putting a crypto mining plant on top of the farms would theoretically give them immediate revenue. “If the goal of this country is to convert to green or sustainable energy forms for the majority of our energy use by 2050, the only way it’s going to happen is if the power generators have an incentive to build the power plants,” Thiel says.

But Thiel declined to give the name of the Vermont wind farms, and a follow-up email to a Marathon representative asking for the name of that operation or any similar ones received no response. Most experts TIME spoke with dispute the idea that there has been any sort of boom in renewables due to crypto. “I am not aware of any specific examples where a major crypto mining project directly—and additionally—boosted renewable energy production,” Kamiya wrote.

“The proof is in the pudding–and I have not seen that play out in the state of Montana,” says Missoula County Commissioner Dave Strohmaier, whose county hosted energy-intensive mining operations that rankled local communities, leading the local government to restrict miners’ ability to set up new operations.

Joshua Rhodes says that counties in Texas were ”chock-full of renewable projects getting built and turning on” even before the Bitcoin mining rush. He also argues that even if crypto did spur a renewables boom, it might not even help the right places. While wind and solar energy is plentiful in West Texas, for example, it requires extensive infrastructure and transmission lines to run that power back east to the cities that desperately need it, like Houston and Dallas. “All of the cheap electricity can’t get out,” he says.

And even if it were true that crypto mining is creating rapidly accelerating demand for solar and wind farms—which, again, doesn’t seem to yet be the case—there’s the problem of where to put them. Many communities and organizations have opposed such projects on grounds ranging from aesthetics to conservation. In New York, Assemblymember Anna Kelles—who spearheaded a bill to impose a moratorium on crypto mining in the state—says that a crypto-driven influx of solar and wind operations would be “directly competing with farmland in New York State at a time when it’s becoming more and more the breadbasket of the country because of climate change.”

With major resistance and long timetables to erect wind and solar projects, impatient crypto miners are more likely to set up shop using other, less clean forms of energy. In Kentucky, abandoned coal mines are being repurposed into crypto mining centers.

Claim: Crypto miners improve electricity grids.

If crypto companies aren’t yet supercharging a renewables boom, then maybe they’re helping in other ways, like making our electricity grids more resilient. Thiel argues that crypto miners are uniquely suited to help grids for several reasons: they can be turned off quickly during peak hours of energy usage in a way that, say, pasteurization machines can’t; they can soak up energy from the grid that would otherwise be wasted; and they can be located very close to sources of energy.

“We voluntarily curtail whenever the grid needs the energy,” Thiel says. “It acts as this ideal buffer for the grid.” During peak stretches of Texas’s energy usage, Thiel says, Marathon has lowered or completely shut off their usage of the grid for two to three hours a day.

Flexible energy loads are, in fact, good for the grid, Rhodes wrote in a study last year.

He found that if crypto miners were willing to curtail their energy usage during peak times so that their annual load is slashed by 13-15%, then their enterprises would help reduce carbon emissions, improve grid resiliency under high-stress periods, and also help foster the shift to renewables.

But Rhodes and others are skeptical that most miners will be willing to operate on someone else’s schedule. Crypto miners have shown that in order to maximize their profits, they would much rather operate 24/7. Strohmaier, in Montana, says that when he met with crypto miners operating in his county about their activity, the topics of grid resilience or curtailment “never came up once. We never got the sense there was any willingness to scale back even for a nanosecond of what they were doing. It was all, ‘We have to keep every one of these machines running—and add more if we are able to remain viable,’” he says.

Thiel says that when there isn’t enough energy from the wind farms to power Marathon’s plants—as wind doesn’t blow all the time—the company then supplements it partially with natural gas from the grid. When asked for a breakdown of Marathon’s energy usage, a representative wrote in an email, “We’re still in the process of installing miners in Texas. It’s hard to estimate what the ultimate mix will be.”

Claim: Crypto miners are simply using energy that would have gone to waste.

Plenty of electricity gets wasted in the U.S., and crypto miners are hoping to take advantage of it. The process of oil extraction, for example, produces a natural gas byproduct that many companies simply choose to flare (burn off and waste) rather than building the infrastructure to capture it. But in North Dakota, crypto miners signed a deal with Exxon to set up shop directly on site and use gas that would have been flared for new mining operations instead.

Some experts say this process could still be severely damaging. “I don’t see that as a benefit: They’re still burning the gas,” says Anthony Ingraffea, a civil and environmental engineering professor at Cornell University, who co-wrote a paper in 2011 on the environmental hazards of extracting natural gas.

Further, Ingraffea argues, by giving Exxon extra business at their oil drilling sites, crypto mining theoretically incentivizes the fossil fuel industry to keep investing in oil extraction. Kamiya contends that there are other productive uses for flared gas, including producing electricity to be sold back to the grid, but that crypto mining “could disincentivize the operator from finding other uses and markets for its gas that can drive higher emission reductions.”

And crypto miners are running into problems even in ideal energy circumstances. A paper released this month from the Coinbase Institute contends that in Iceland, a “new gold rush” of mining activity has led to minimal environmental impacts due to the country’s “abundant geothermal energy.” But in December, the country experienced a severe electricity shortage, causing its main utility provider to announce they would reject all future crypto mining power requests.

Claim: Some crypto mining operations are already carbon neutral.

Last year, Greenidge Generation, the crypto mining facility in New York, tried to quell criticisms about its environmental impact by announcing its intention to become carbon neutral. In a press release, the company said it would purchase carbon offsets and invest in renewable energy projects to account for its gas-based emissions.

Replacing fossil-fuel-based energy with renewable energy is certain to be an environmental good. But carbon offsets are not as clear-cut. The offset industry has come under fire from scientists who say that many such projects are poorly defined and not as helpful as they seem—that it’s common for projects with no positive environmental impact to be rewarded on technicalities. Offsets essentially allow companies to pay to continue polluting. Greenpeace even called the entire system “a distraction from the real solutions to climate change.”

Carbon offsets “do not reduce global emissions, they just move them around the globe,” Ingraffea says. He argues that they should only be used in the case of emissions that are impossible to reduce.

Read more: The Crypto Industry Was On Its Way to Changing the Carbon-Credit Market, Until It Hit a Major Roadblock

Claim: Data centers are just as bad for pollution as crypto mining operations.

Many crypto miners feel unfairly targeted about their environmental impact, believing that data centers, which receive far less scrutiny, are just as responsible for increasing carbon emissions.

Multiple experts disagree. “Crypto mining consumes about twice as much electricity as Amazon, Google, Microsoft, Facebook, and Apple combined,” says Kamiya.

Jonathan Koomey, a researcher who has been studying information technology and energy use for more than 30 years, says that the two categories of machines are moving in opposite directions in terms of efficiency. A 2020 study he co-wrote found that while the computing abilities and output of regular data centers grew vastly between 2010 and 2018, their electricity use barely increased at all. Meanwhile, in Bitcoin mining, “there’s a structural incentive for the entire system to get less efficient over time,” he says. He’s referring to the fact that, generally, Bitcoin miners are forced to solve harder and harder puzzles over time to keep the blockchain functioning—and the computing power to work through those tasks requires increasing amounts of energy.
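The incentive Koomey describes can be sketched in a few lines of code. The toy miner below is a simplified illustration, not Bitcoin’s actual protocol code: it searches for a nonce whose double-SHA-256 hash falls below a target, and each additional difficulty bit halves that target, roughly doubling the expected number of hashes—and therefore the electricity—needed to find a valid block.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose double-SHA-256 hash is below a target,
    a toy version of Bitcoin's proof-of-work puzzle."""
    target = 1 << (256 - difficulty_bits)  # higher difficulty = smaller target
    nonce = 0
    while True:
        payload = block_data + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # found a valid "block"
        nonce += 1

# Each extra difficulty bit roughly doubles the expected number of hashes
# needed, which is why rising difficulty translates into rising energy use.
print(mine(b"example block header", 12))
```

On real hardware the same search is run trillions of times per second, which is where the energy bill comes from.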

Claim: Christmas lights use more electricity than Bitcoin.

This claim has been repeated over and over by Bitcoin mining defenders, including Thiel in our interview, in order to deflect attention from Bitcoin mining and onto other large uses of electricity. It’s also completely unsubstantiated. The most recent major study on holiday lights, a paper from 2008, put their electricity consumption in the U.S. at 6.63 terawatt-hours per year. (The paper noted that figure would only decrease as LED bulbs became more common.) The Bitcoin network, by comparison, consumes an estimated 91 terawatt-hours yearly, nearly 14 times as much.

Popular online posts on this topic that defend Bitcoin, including from the digital mining operator Mawson, either do not cite any sources for their data or mangle the findings of trusted institutions.

Claim: Bitcoin’s value added to society will make it all worth it.

Koomey and other experts say that over the last decade there has been only one surefire reason crypto mining’s environmental impact sometimes falls: cryptocurrency prices going down. During these drops, miners are disincentivized to stay in the market or buy new equipment, and some close up shop, leading to fewer greenhouse-gas emissions. Indeed, as Bitcoin’s value fell from $40,000 to $20,000 between late April and June, industry power usage also dropped by a third, according to the Cambridge Bitcoin Electricity Consumption Index.

So why should the U.S. allow crypto miners to go on, if they’re harming the environment? Crypto enthusiasts argue that the long-term societal and economic benefits of their industry will offset its electricity usage, just as the computer revolution did before it.

Koomey says that when weighing the possible environmental impacts of crypto, it’s important to take a wide-lens approach: to think about what crypto might add to society overall compared to other energy guzzlers.

“Sure, Google uses a measurable amount of electricity—but I would argue that’s a pretty good use of that electricity,” he says. “So you have to come back to this question for the crypto people, aside from just how much electricity they use: What business value are you delivering? How does this technology perform a function better than the technology that it replaces? Is it worth it?”

 

Source: Tech – TIME | 2 Jul 2022 | 1:57 am

We’re Dangerously Close to Giving Big Tech Control Of Our Thoughts

Elon Musk has proclaimed himself a “free speech absolutist,” though reports of how employees of his companies have been treated when exercising their free speech rights to criticise him might indicate that his commitment to free speech has its limits. But as Musk’s bid to take over Twitter progresses in fits and starts, the potential for anyone to access and control billions of opinions around the world for the right sum should focus all our minds on the need to protect an almost forgotten right: the right to freedom of thought.

In 1942 the U.S. Supreme Court wrote: “Freedom to think is absolute of its own nature; the most tyrannical government is powerless to control the inward workings of the mind.” The assumption that getting inside our heads is a practical impossibility may have prevented lawyers and legislators from dwelling too much on regulation that protects our inner lives. But it has not stopped powerful people from trying to access and control our minds for centuries.

At his trial for war crimes in Nuremberg after the Second World War, Albert Speer, Hitler’s former Minister of Armaments, explained the power of the Nazis’ propaganda machine: “Through technical devices such as radio and loudspeaker 80 million people were deprived of independent thought. It was thereby possible to subject them to the will of one man…. Today the danger of being terrorized by technocracy threatens every country in the world.”

When whole communities are deprived of independent thought, it undermines their individual rights to freedom of thought and opinion. But it is not only a threat to the rights of the people who are manipulated. As the world saw with Nazi Germany, it becomes a threat to all our rights. Tragically, Speer’s warning is acutely resonant in the 21st century, as technology has been harnessed as an even more efficient tool for manipulating and controlling the minds of populations, with devastating consequences.

Facebook’s role in facilitating genocide in Myanmar, a country where the platform turned into a “beast,” according to UN fact-finders, was a wake-up call to the potential for social media profiling and targeting to twist people’s minds and incite deadly violence. Facebook committed to doing better, but as the events that played out across the world’s screens from the U.S. Capitol on Jan. 6 last year and the ongoing ethnic violence in Ethiopia today show, its approach is hopelessly inadequate.

The problem is not just the enormous challenge of content moderation on a global platform: the business model of social media, based on surveillance advertising, facilitates the hijacking of user engagement to deadly effect. An investigation by the campaign group Global Witness last year showed that it was able to get inflammatory adverts approved on Facebook targeting individuals in Northern Ireland across the sectarian divide at a time of heightened tensions. When propaganda can be automated, curated and targeted to reach billions worldwide for profit, it is an existential threat to humanity, and one that none of us can afford to ignore.

It may have been Facebook in the frame for these incidents of violence, but a global platform like Twitter could be an equally powerful tool for manipulation on a massive scale, as Donald Trump well knew. Twitter is valuable not because of what you can say on the platform, but because of the billions of opinions you can control through the curation of individual news feeds.

Increasingly the purpose of technological innovations, whether in the online environment or using big data, artificial intelligence or neuroscience is, precisely, to access and control the inner workings of our minds. That is where the money is. Inferences drawn from huge pools of data about us are used to decide if we are susceptible to gambling or prone to conspiracy theories, if we are anxious or over-confident so that our vulnerabilities can be managed, sold and exploited. Our minds are a valuable commodity in both the commercial and the political spheres.

Technology is being developed not only to predict our political leanings from our faces, but also to identify our individual psychological buttons and to press them in ways that might make it more or less likely that you’ll get off the couch and go out and vote. The rise of “neuropolitics” and the use of political behavioural psychography in electoral processes around the world is problematic because it undermines the foundations of democracy, no matter who is paying for it or which way you vote.

The use of these tactics spans ideological divides, but when we are talking about mass mind control, the implications are so profound and devastating that the ends can never justify the means. Politics is always about influence and persuasion, but democracy relies on the votes of individuals who have formed their opinions free from manipulation. At a time when many of us get most, or all, of our information online, we need to make sure that our minds cannot be easily hijacked and sold to the highest bidder. The Washington, D.C. Attorney General’s recent move to sue Mark Zuckerberg personally for Facebook’s role in the Cambridge Analytica scandal may mark a shift in the right direction. Free flows of information allow us to form our opinions freely; we cannot afford to make one person the global gatekeeper to our minds, no matter how keen they may seem to be on freedom.

It’s not just about social media. Technologists have ambitions beyond the screens we stare at and the enthralling devices we have glued to our palms. Elon Musk’s Neuralink has its sights set on nothing less than direct access to our brains. It claims to be “designing the first neural implant that will let you control a computer or mobile device anywhere you go.” Brain-computer interfaces (BCIs) won’t just control the way we receive information; they will control how our minds meet the world.

When Facebook announced its own project to develop a non-invasive BCI in 2017, it promised that it would only read the thoughts you wanted it to. In 2021, it shelved the project. Perhaps it realised direct access to the human mind was a bridge too far. Reading our minds in real time is only part of it: in 2019, neuroscientists published an experiment in which, by implanting electrodes into the brains of mice, they could make the animals see things that were simply not there. If you worry about deepfakes, imagine how much more dangerous they could be if they were not on your screen but actually in your head.

While brain implants for manipulative marketing or mind-reading may not sound too appealing, when dressed up as health tech these tools suddenly become saleable. Musk has reportedly claimed that Neuralink could help control hormone levels and mood to our advantage. But if your hormone levels can be changed to alter your brain to your advantage, they may also be altered to the advantage of someone else, if that were more profitable or politically expedient. And imagine if you had to upgrade your brain as often as your phone to deal with those tricky built-in downgrades depleting your battery and clogging up your memory just that bit faster with every new model….

Whether we are looking at the global management of information flows or the tiny threads of Neuralink’s brain-computer-interface pushing through our skulls, we need to wake up to the fundamental threat of systems that allow direct access to our minds en masse.

While freedom of speech can be limited in certain circumstances, the right to freedom of thought is absolute in international human rights law. It means that we have the right to keep our thoughts private, not to be penalized for our thoughts alone, and to keep our inner lives free from manipulation. We can no longer rely on the Supreme Court’s outdated assumption that no one can control the inward workings of our mind. Whether or not the claims are overblown, technology is already trading on its potential to do just that.

We need freedom of thought to combat climate change, racism and global poverty, and to fall in love, laugh and dream. It is crucial to the cultural, scientific, political and emotional life in our societies. Once we lose it we may never get it back. Musk’s takeover of Twitter combined with his ambitions for Neuralink make the threat of big tech to our freedom of thought impossible to ignore.

We need serious regulation to check the systems that want to get inside our heads, and we need it now. Tackling the business models underpinning social media and banning the use of BCIs as consumer products before they become a mass-market reality would be first but urgent steps. We must learn from history the dangers of letting one man control the minds of millions or billions of people, and we must be prepared to say no before it is too late.

It’s not about Elon Musk, it’s about anyone having that kind of access to your mind. Our future should not be built on the best way to monetise the global population and obtain world domination for the few. It must be grounded in what it means to be human, and for that, we must have the freedom to think.

Adapted from Freedom to Think: The Long Struggle to Liberate Our Minds by Susie Alegre, published by Atlantic Books

Source: Tech – TIME | 30 Jun 2022 | 4:19 am

Company Buying Trump’s Social Media App Faces Subpoenas

The company planning to buy Donald Trump’s new social media business has disclosed a federal grand jury investigation that it says could impede or even prevent its acquisition of the Truth Social app.

Shares of Digital World Acquisition Corp. dropped 10% in morning trading Monday as the company revealed that it has received subpoenas from a grand jury in New York.

The Justice Department subpoenas follow an ongoing probe by the Securities and Exchange Commission into whether Digital World broke rules by holding substantial talks about buying Trump’s company beginning early last year, before Digital World sold stock to the public for the first time in September, just weeks before announcing that it would buy Trump’s company.

Read More: What to Know About Digital World, the Company Funding Trump’s New Social Media Platform ‘TRUTH Social’

Trump’s social media venture launched in February as he seeks a new digital stage to rally his supporters and fight Big Tech limits on speech, a year after he was banned from Twitter, Facebook and YouTube.

The Trump Media & Technology Group — which operates the Truth Social app and was in the process of being acquired by Digital World — said in a statement that it will cooperate with “oversight that supports the SEC’s important mission of protecting retail investors.”

The new probe could make it more difficult for Trump to finance his social media company. The company last year got promises from dozens of investors to pump $1 billion into the company, but it can’t get the cash until the Digital World acquisition is completed.

Stock in Digital World rocketed to more than $100 in October after its deal to buy Trump’s company was announced. The stock traded at around $25 in morning trading Monday.

Digital World is a special-purpose acquisition company, or SPAC, part of an investing phenomenon that exploded in popularity over the past two years.

Such “blank-check” companies are empty corporate entities with no operations, only offering investors the promise they will buy a business in the future. As such they are allowed to sell stock to the public quickly without the usual regulatory disclosures and delays, but only if they haven’t already lined up possible acquisition targets.

Read More: What’s Allowed on Trump’s New ‘TRUTH’ Social Media Platform—And What Isn’t

Digital World said in a regulatory filing Monday that each member of its board of directors has been subpoenaed by the grand jury in the Southern District of New York. Both the grand jury and the SEC are also seeking a number of documents tied to the company and others including a sponsor, ARC Global Investments, and Miami-based venture capital firm Rocket One Capital.

Some of the sought documents involve “due diligence” regarding Trump Media and other potential acquisition targets, as well as communications with Digital World’s underwriter and financial adviser in its initial public offering, according to the SEC disclosure.

Digital World also announced Monday the resignation of one of its board members, Bruce Garelick, a chief strategy officer at Rocket One.

Source: Tech – TIME | 28 Jun 2022 | 7:01 am

Ads Are Officially Coming to Netflix. Here’s What That Means for You

After years of resisting advertisements on its streaming platform, Netflix is introducing commercials to its service.

Netflix co-CEO Ted Sarandos confirmed on Thursday that the company would begin testing an ad-supported, lower-priced subscription tier. The streaming company is speaking to multiple potential partners to help ease its entrance into the ad world, Sarandos said while speaking at the international ad festival Cannes Lions. Those partners reportedly include Comcast, NBCUniversal, and Google.

Sarandos’ confirmation comes in the midst of a rough year for Netflix. As competition among entertainment services grows more intense, the streaming giant lost subscribers for the first time in a decade, faced a backlash for cracking down on password sharing, and laid off over 150 employees (or about 1.5% of its global workforce).

“We’ve left a big customer segment off the table, which is people who say, ‘Hey, Netflix is too expensive for me and I don’t mind advertising,’” Sarandos said. “We’re adding an ad-tier. We’re not adding ads to Netflix as you know it today.”

Netflix co-CEO Reed Hastings had telegraphed the advertising plan, suggesting on a first-quarter earnings call in April that ads could be on the way in the next year or two. “Those who have followed Netflix know that I’ve been against the complexity of advertising and a big fan of the simplicity of subscription. But as much as I’m a fan of that, I’m a bigger fan of consumer choice,” he said. “And allowing consumers who like to have a lower price, and are advertising tolerant, to get what they want makes a lot of sense.”

Then, the New York Times reported in May that Netflix had told its employees an ad-based plan could launch by the end of the year, sooner than previously expected.

Netflix lost 200,000 subscribers in the first three months of 2022 and forecasted greater losses to come in an April shareholder letter. The company’s stock price has plunged more than 70% this year (compared with the S&P 500’s 21% decline), wiping out roughly $70 billion of its market capitalization and prompting shareholders to file a lawsuit alleging that Netflix misled investors about declining subscriber growth.

Now, the hope at Netflix is to generate more revenue by embracing ads. And it’s not alone. Competitors like Hulu and HBO Max already offer ad-based plans that are cheaper than their commercial-free services, while Disney+ announced in March that it would be rolling out an ad-supported subscription tier in late 2022.

With Netflix’s current monthly subscription model, subscribers in the U.S. can use their account on one, two, or four screens at once and prices reflect the number of screens available—ranging between $9.99 and $19.99. The new ad-supported tier will create a lower-priced option for subscribers who are willing to watch commercials in exchange for paying a little less.

Source: Tech – TIME | 24 Jun 2022 | 2:56 am

Fun AI Apps Are Everywhere Right Now. But a Safety ‘Reckoning’ Is Coming

If you’ve spent any time on Twitter lately, you may have seen a viral black-and-white image depicting Jar Jar Binks at the Nuremberg Trials, or a courtroom sketch of Snoop Dogg being sued by Snoopy.

These surreal creations are the products of Dall-E Mini, a popular web app that creates images on demand. Type in a prompt, and it will rapidly produce a handful of cartoon images depicting whatever you’ve asked for.

More than 200,000 people are now using Dall-E Mini every day, its creator says—a number that is only growing. A Twitter account called “Weird Dall-E Generations,” created in February, has more than 890,000 followers at the time of publication. One of its most popular tweets so far is a response to the prompt “CCTV footage of Jesus Christ stealing [a] bike.”


If Dall-E Mini seems revolutionary, it’s only a crude imitation of what’s possible with more powerful tools. As the “Mini” in its name suggests, the tool is effectively a copycat version of Dall-E—a much more powerful text-to-image tool created by one of the most advanced artificial intelligence labs in the world.

That lab, OpenAI, boasts online of (the real) Dall-E’s ability to generate photorealistic images. But OpenAI has not released Dall-E for public use, due to what it says are concerns that it “could be used to generate a wide range of deceptive and otherwise harmful content.” It’s not the only image-generation tool that’s been locked behind closed doors by its creator. Google is keeping its own similarly powerful image-generation tool, called Imagen, restricted while it studies the tool’s risks and limitations.

The risks of text-to-image tools, Google and OpenAI both say, include the potential to turbocharge bullying and harassment; to generate images that reproduce racism or gender stereotypes; and to spread misinformation. They could even reduce public trust in genuine photographs that depict reality.

Text could be even more challenging than images. OpenAI and Google have both also developed their own synthetic text generators that chatbots can be based on, which they have also chosen to not release widely to the public amid fears that they could be used to manufacture misinformation or facilitate bullying.

Read more: How AI Will Completely Change the Way We Live in the Next 20 Years

Google and OpenAI have long described themselves as committed to the safe development of AI, pointing to, among other things, their decisions to keep these potentially dangerous tools restricted to a select group of users, at least for now. But that hasn’t stopped them from publicly hyping the tools, announcing their capabilities, and describing how they made them. That has inspired a wave of copycats with fewer ethical hangups. Increasingly, tools pioneered inside Google and OpenAI have been imitated by knockoff apps that are circulating ever more widely online, and contributing to a growing sense that the public internet is on the brink of a revolution.

“Platforms are making it easier for people to create and share different types of technology without needing to have any strong background in computer science,” says Margaret Mitchell, a computer scientist and a former co-lead of Google’s Ethical Artificial Intelligence team. “By the end of 2022, the general public’s understanding of this technology and everything that can be done with it will fundamentally shift.”

The copycat effect

The rise of Dall-E Mini is just one example of the “copycat effect”—a term used by defense analysts to understand the way adversaries take inspiration from one another in military research and development. “The copycat effect is when you see a capability demonstrated, and it lets you know, oh, that’s possible,” says Trey Herr, the director of the Atlantic Council’s cyber statecraft initiative. “What we’re seeing with Dall-E Mini right now is that it’s possible to recreate a system that can output these things based on what we know Dall-E is capable of. It significantly reduces the uncertainty. And so if I have resources and the technical chops to try and train a system in that direction, I know I could get there.”

That’s exactly what happened with Boris Dayma, a machine learning researcher based in Houston, Texas. When he saw OpenAI’s descriptions online of what Dall-E could do, he was inspired to create Dall-E Mini. “I was like, oh, that’s super cool,” Dayma told TIME. “I wanted to do the same.”

“The big groups like Google and OpenAI have to show that they are on the forefront of AI, so they will talk about what they can do as fast as they can,” Dayma says. “[OpenAI] published a paper that had a lot of very interesting details on how they made [Dall-E]. They didn’t give the code, but they gave a lot of critical elements. I wouldn’t have been able to develop my program without the paper they published.”

In June, Dall-E Mini’s creators said the tool would be changing its name to Craiyon, in response to what they said was a request from OpenAI “to avoid confusion.”

Advocates of restraint, like Mitchell, say it’s inevitable that accessible image- and text-generation tools will open up a world of creative opportunity, but also a Pandora’s box of awful applications—like depicting people in compromising situations, or creating armies of hate-speech bots to relentlessly bully vulnerable people online.

Read more: An Artificial Intelligence Helped Write This Play. It May Contain Racism

But Dayma says he is confident that the dangers of Dall-E Mini are negligible, since the images it generates are nowhere near photorealistic. “In a way it’s a big advantage,” he says. “I can let people discover that technology while still not posing a risk.”

Some other copycat projects come with even more risks. In June, a program named GPT-4chan emerged. It was a text-generator, or chatbot, that had been trained on text from 4chan, a forum notorious for being a hotbed of racism, sexism and homophobia. Every new sentence it generated sounded similarly toxic.

Just like Dall-E Mini, the tool was created by an independent programmer but was inspired by research at OpenAI. Its name, GPT-4chan, was a nod to GPT-3, OpenAI’s flagship text-generator. Unlike the copycat version, GPT-3 was trained on text scraped from large swathes of the internet, and its creator, OpenAI, has only been granting access to GPT-3 to select users.

A new frontier for online safety

In June, after GPT-4chan’s racist and vitriolic text outputs attracted widespread criticism online, the app was removed from Hugging Face, the website that hosted it, for violating its terms and conditions.

Hugging Face makes machine learning-based apps accessible through a web browser. The platform has become the go-to location for open source AI apps, including Dall-E Mini.

Clement Delangue, the CEO of Hugging Face, told TIME that his business is booming, and heralded what he said was a new era of computing with more and more tech companies realizing the possibilities that could be unlocked by pivoting to machine learning.

But the controversy over GPT-4chan was also a signal of a new, emerging challenge in the world of online safety. Social media, the last online revolution, made billionaires out of platforms’ CEOs, and also put them in the position of deciding what content is (and is not) acceptable online. Questionable decisions have tarnished those CEOs’ once glossy reputations. Now, smaller machine learning platforms like Hugging Face, with far fewer resources, are becoming a new kind of gatekeeper. As open-source machine learning tools like Dall-E and GPT-4chan proliferate online, it will be up to their hosts, platforms like Hugging Face, to set the limits of what is acceptable.

Delangue says this gatekeeping role is a challenge that Hugging Face is ready for. “We’re super excited because we think there is a lot of potential to have a positive impact on the world,” he says. “But that means not making the mistakes that a lot of the older players made, like the social networks – meaning thinking that technology is value neutral, and removing yourself from the ethical discussions.”

Still, like the early approach of social media CEOs, Delangue hints at a preference for light-touch content moderation. He says the site’s current policy is to politely ask creators to fix their models, and to remove them entirely only as an “extreme” last resort.

But Hugging Face is also encouraging its creators to be transparent about their tools’ limitations and biases, informed by the latest research into AI harms. Mitchell, the former Google AI ethicist, now works at Hugging Face focusing on these issues. She’s helping the platform envision what a new content moderation paradigm for machine learning might look like.

“There’s an art there, obviously, as you try to balance open source and all these ideas around public sharing of really powerful technology, with what malicious actors can do and what misuse looks like,” says Mitchell, speaking in her capacity as an independent machine learning researcher rather than as a Hugging Face employee. She adds that part of her role is to “shape AI in a way that the worst actors, and the easily-foreseeable terrible scenarios, don’t end up happening.”

Mitchell imagines a worst-case scenario where a group of schoolchildren train a text-generator like GPT-4chan to bully a classmate via their texts, direct messages, and on Twitter, Facebook, and WhatsApp, to the point where the victim decides to end their own life. “There’s going to be a reckoning,” Mitchell says. “We know something like this is going to happen. It’s foreseeable. But there’s such a breathless fandom around AI and modern technologies that really sidesteps the serious issues that are going to emerge and are already emerging.”

The dangers of AI hype

That “breathless fandom” was encapsulated in yet another AI project that caused controversy this month. In early June, Google engineer Blake Lemoine claimed that one of the company’s chatbots, called LaMDA, based on the company’s synthetic-text generation software, had become sentient. Google rejected his claims and placed him on administrative leave. Around the same time, Ilya Sutskever, a senior executive at OpenAI, suggested on Twitter that computer brains were beginning to mimic human ones. “Psychology should become more and more applicable to AI as it gets smarter,” he said.

In a statement, Google spokesperson Brian Gabriel said the company was “taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality.” OpenAI declined to comment.

For some experts, the discussion over LaMDA’s supposed sentience was a distraction—at the worst possible time. Instead of arguing over whether the chatbot had feelings, they argued, AI’s most influential players should be rushing to educate people about the potential for such technology to do harm.

“This could be a moment to better educate the public as to what this technology is actually doing,” says Emily Bender, a linguistics professor at the University of Washington who studies machine learning technologies. “Or it could be a moment where more and more people get taken in, and go with the hype.” Bender adds that even the term “artificial intelligence” is a misnomer, because it is being used to describe technologies that are nowhere near “intelligent”—or indeed conscious.

Still, Bender says that image-generators like Dall-E Mini may have the capacity to teach the public about the limits of AI. It’s easier to fool people with a chatbot, because humans tend to look for meaning in language, no matter where it comes from, she says. Our eyes are harder to trick. The images Dall-E Mini churns out look weird and glitchy, and are certainly nowhere near photorealistic. “I don’t think anybody who is playing with Dall-E Mini believes that these images are actually a thing in the world that exists,” Bender says.

Despite the AI hype that big companies are stirring up, crude tools like Dall-E Mini show how far the technology has to go. When you type in “CEO,” Dall-E Mini spits out nine images of a white man in a suit. When you type in “woman,” the images all depict white women. The results reflect the biases in the data that both Dall-E Mini and OpenAI’s Dall-E were trained on: images scraped from the internet. That inevitably includes racist, sexist and other problematic stereotypes, as well as large quantities of porn and violence. Even when researchers painstakingly filter out the worst content (as both Dayma and OpenAI say they have done), more subtle biases inevitably remain.

Read more: Why Timnit Gebru Isn’t Waiting for Big Tech to Fix AI’s Problems

While the AI technology is impressive, these kinds of basic shortcomings still plague many areas of machine learning. And they are a central reason that Google and OpenAI are declining to release their image and text-generation tools publicly. “The big AI labs have a responsibility to cut it out with the hype and be very clear about what they’ve actually built,” Bender says. “And I’m seeing the opposite.”

Source: Tech – TIME | 23 Jun 2022 | 10:30 pm

Meta Says UK Bill Risks Messages Being Surveilled And Censored

Meta Platforms Inc. said UK online safety legislation “risks people’s private messages being constantly surveilled and censored” unless it’s changed, adding to a long list of complaints recently lodged against the proposed law.

The sweeping Online Safety Bill is winding its way through Parliament and it’s intended to come into force next year. The government has estimated it will apply to more than 25,000 services.

The bill still faces possible amendments, but a draft pushes the very biggest social media and search engines to help people avoid “legal but harmful content” on their so-called user-to-user services, Meta said.

Read More: Meta Is One of The 2022 TIME100 Most Influential Companies

That doesn’t distinguish between messaging and public social media, and could imply “scanning all private messaging,” WhatsApp and Facebook owner Meta argued in written evidence published Wednesday, adding to a list of concerns and proposed amendments published since the draft bill’s publication in March.

“Tech firms have failed to tackle child abuse and end-to-end encryption could blind them to it on their sites while hampering efforts to catch the perpetrators,” a spokesman for the Department for Digital, Culture, Media and Sport said by email. “As a last resort Ofcom has the power to make private messaging apps use technology to identify child sexual abuse material – this can only be used when proportionate and with strict legal privacy safeguards in place.”

The range of submissions reflects the proposed law’s breadth and complexity. Lawmakers have received letters from Silicon Valley giants, news publishers, religious groups, broadcasters, insurers, the LEGO Group, e-cigarette manufacturer Juul Labs, dating app Bumble, animal rights advocates, and more.

Read More: Frances Haugen Calls for ‘Solidarity’ With Facebook Content Moderators in Conversation with Whistleblower Daniel Motaung

Meta has also received its own criticism around perceived censorship, misinformation, and controversies over training politicians who later used Facebook to spread propaganda. A representative for Meta declined to comment on these stories, but said the company supports the introduction of regulations.

Within the submissions, Twitter Inc. said it’s concerned about issues in the bill around freedom of expression as well as the precedent that could be set by the bill’s threat of criminal sanctions. It added that exemptions around journalism should be removed due to the risk of exploitation by “bad faith actors”.

Alphabet Inc.’s Google said the bill appears to incentivize “automated general monitoring, and over-removal, of content.”


Source: Tech – TIME | 23 Jun 2022 | 6:52 am

Mark Zuckerberg Wants to Rule the Future of the Internet. Chuck Schumer Must Stop Him

John Oliver’s masterful explanation of tech monopolies—and how Senate Majority Leader Chuck Schumer (D-NY) is blocking legislation to rein them in—is must-watch television for anyone who cares about innovation and competition in America. He explained in detail how Apple, Google, and Amazon use their core products and their market share to crush competitors and enter new businesses while taking advantage of everyday Americans.

It is easy for anyone to see that it is unfair when Amazon copies a bag that becomes popular on its website and offers its own, identical version through Amazon Basics. That’s one example of “self-preferencing,” the term used to describe the behavior of monopolists who leverage their control of a marketplace to hurt players in that market, that Mr. Oliver expertly described.

But Oliver didn’t talk about Meta, which is arguably the biggest winner if Senator Chuck Schumer chooses not to lift his de facto hold on the American Innovation and Choice Online Act (AICO), bipartisan reform legislation currently in the Senate that would limit the ability of marketplace owners to exploit their power over market participants. For example, Amazon has been accused of copying best sellers in its marketplace and then using its control of the platform to steer traffic to its copies.

Meta’s platforms—Facebook, Instagram, and WhatsApp—operate marketplaces that appear at first glance to be different from those of Google, Amazon, and Apple. So why is Meta putting unprecedented resources into killing the AICO, which appears to be targeted at the other Big Tech monopolies? Because just like the other Big Tech monopolists, Meta employs self-preferencing to undermine the viability of smaller companies that are dependent on it.

AICO threatens to unravel some of Meta’s less recognized anti-competitive practices. For example, Facebook blocks users on its chat platforms from communicating with users on other chat platforms. One chat start-up CEO says there’s “very little innovation in chat” because companies like Facebook “lock people into using their product.” You don’t need to take that CEO’s word for it. Facebook actually cut off another messaging service’s access to its infrastructure, fearing it might become a competitor.

The deeper reason Meta is fighting AICO tooth and nail is this: AICO would be a barrier to Meta implementing its long-term goal of controlling what Mark Zuckerberg describes as the next internet revolution: the metaverse.

From 2006 to 2009, I was an advisor to Zuckerberg. I also led my firm’s early investment in Facebook. Since then, I’ve written and spoken (again and again) about how the company has lost its way, how its culture, business practices, and algorithms undermine public health, democracy, privacy, and competition in our economy.

Over forty years in tech, I have witnessed the industry morph from a culture of empowering customers with technology to one of exploiting human weakness. An industry that was once dynamic and revolutionary is now controlled by a half dozen monopolies whose idea of innovation is to add just enough functionality to keep customers locked in. In many ways, Facebook is a poster child for that shift. Its journey, from dorm-room startup to trillion-dollar company, took just sixteen years.

Along the way, Facebook pioneered an entirely new industry, giving people genuinely new and useful products. Now, it behaves the way aging monopolists always do—protecting its turf, copying the best ideas from emerging players, and exploiting consumers instead of serving them. I had a front row seat to the transformation, and I’m here to tell you that Zuckerberg must be stopped.

The transformation of America’s tech industry from an engine of growth to a collection of parasitic monopolies occurred over the past fifteen years, slowly at first, then decisively. It took place at a time when there was almost no oversight of tech companies. Facebook reached the apex of our economy by gobbling up would-be competitors WhatsApp and Instagram before they posed a real threat, allowing it to corner the global market on both messaging and photo-sharing, creating giant moats to extend and protect its monopoly in social media.

Later, Facebook added a “Stories” feature copied from a smaller competitor, Snapchat, and was able to corner the market on a new way to share photos and videos that was made possible by its ownership of Instagram. Facebook admits to repeating that strategy, trying to look and feel more like TikTok in the hope of preventing that emerging competitor from undercutting its social media monopoly.

Last fall, Facebook suddenly changed its name to Meta and Zuckerberg gave a demo of the metaverse, which he described as “the next chapter for the internet.” The timing was unexpected and appeared to be rushed. Coming as it did on the heels of whistleblower Frances Haugen’s earth-shaking revelations, analysts like me hypothesized that the name change was a desperate effort to change the subject. It largely worked.

When Zuckerberg says he wants to build “immersive, all-day experiences” and says the metaverse will “become the primary way that we live our lives and spend our time,” what he is really saying is that he will build a platform where every participant, from users to corporate partners to merchants, will be at his mercy. If the AICO is not passed, Meta will be able to undermine the business of any company or manipulate the choices of any user in the metaverse. The philosophy of AICO is that a company should be able to own a marketplace or participate in the marketplace, but not both.

Meta has an incentive to be open in the early days of the metaverse to attract partners and users, but the history of Facebook suggests that once it achieves critical mass, the company will engage in anticompetitive behavior. The most likely endgame is a metaverse where you can only participate with Oculus VR technology, which Meta already owns; only log in via a Facebook account, which Meta already owns; only chat with Messenger, which Meta already owns; and only pay for coffee through WhatsApp’s new payment feature, which Meta also owns. Meta would likely repeat its past behavior by buying the most promising innovators around the metaverse and then using its market power to bury the rest, just as Google and Amazon have done on their own platforms.

It is not in the world’s interest for any Big Tech company or companies to control “the next chapter” of the internet. But that is exactly what is likely if we continue to sit on our hands.

The good news is that Schumer can partially restrain Zuckerberg and the other tech monopolists with the snap of a finger. AICO would spur tech innovation and give consumers better products by cracking down on self-preferencing. The Senate Judiciary Committee has already approved it in an overwhelmingly bipartisan vote. The bill polls well in red and blue states.

Senator Schumer can and must call a vote on AICO. In doing so, he has a chance to make the internet better and sharpen America’s technological edge.

Will he seize the opportunity or will he do Big Tech’s bidding?

Source: Tech – TIME | 23 Jun 2022 | 5:40 am
