Technology News | World

Kamala Harris Debuts Her New TikTok Account

Vice President Kamala Harris Delivers A Keynote At The American Federation of Teachers' 88th National Convention In Houston

“Well, I’ve heard that recently I’ve been on the For You page, so I thought I’d get on here myself,” Vice President Kamala Harris said in the first post on her new personal TikTok account. She has become a seemingly overnight sensation on the Chinese-owned short-form video platform since launching her campaign for President earlier this week.

Within six hours of joining TikTok on Thursday, Harris had already amassed more than one million followers.

President Joe Biden’s campaign created a TikTok account, @bidenhq, in February. Rebranded earlier this week as @KamalaHQ, the account has since more than quadrupled its following to over 1.6 million.

“Our job as a campaign is to break through the noise and make sure we’re talking to voters wherever they are—TikTok is one of those landscapes, and we’re leaving no stone unturned,” deputy campaign manager Rob Flaherty told People in a statement on Thursday. “Getting the Vice President up on TikTok means she’ll be able to directly engage with a key constituency in a way that’s true and authentic to the platform and the audience.”

Harris’ nascent presidential campaign and its supporters have leaned into the new presumptive Democratic nominee’s social media virality, embracing “brat” and “coconut tree” memes in an apparent bid to engage younger voters, an effort that already appears to be paying dividends.

According to a new Axios/Generation Lab poll of 18- to 34-year-olds, Harris leads Trump by 20 points, more than triple the 6-point lead Biden held over him.

Harris joins TikTok—which Trump joined in June—as the platform has come under increasing bipartisan scrutiny in the U.S., largely due to security concerns about its Beijing-based parent company ByteDance.

Harris told ABC News in a March interview that the Biden-Harris administration doesn’t plan to ban TikTok, which she said has “very important” benefits, including as a platform for people to make money and share information. “We need to deal with the owner and we have national security concerns about the owner of TikTok, but we have no intention to ban TikTok,” she said.

In April, however, Biden signed into law a bill that requires ByteDance to divest its stake in TikTok within a year or face a U.S. ban. The company has said it will not sell.

Source: Tech – TIME | 26 Jul 2024 | 5:30 pm

Why Colin Kaepernick Is Starting an AI Company

Colin Kaepernick

When NFL quarterback Colin Kaepernick began kneeling during the national anthem to protest police brutality and racial injustice in 2016, he soon found himself out of a job, eventually moving on to other ventures in media and entertainment. Today, he’s entering the AI industry by launching a project he says he hopes will allow others to bypass “gatekeeping”: an artificial intelligence platform called Lumi.

The new subscription-based platform aims to provide tools for storytellers to create, illustrate, publish and monetize their ideas. The company has raised $4 million in funding led by Alexis Ohanian’s Seven Seven Six, and its product went live today, July 24.

In an interview with TIME, Kaepernick says this project can be viewed as an extension of his activism. “The majority of the world’s stories never come to life. Most people don’t have access or inroads to publishers or platforms—or they may have a gap in their skillset that’s a barrier for them to be able to create,” he says. “We’re going to see a whole new world of stories and perspectives.”

Kaepernick says the idea for Lumi came out of challenges he faced while building his media company, Ra Vision Media, and his publishing company, Kaepernick Publishing, including “long production timelines, high costs, and creators not having ownership over the work they create.” When ChatGPT, DALL-E, and other AI models broke through to the mainstream a couple of years ago, Kaepernick started playing with the tools, even trying to use them to create a children’s book. (Kaepernick penned a graphic novel, Change the Game, based on his high school experiences, last year.)

Lumi aims to help independent creators forge hybrid written-illustrated stories, like comics, graphic novels, and manga. The platform is built “on top of foundational models,” Kaepernick says—although he declined to say which ones. (Foundation models are large, multi-purpose machine learning models like the ones that power ChatGPT.) Users interact with a chatbot to create a character, flesh out their backstory and traits, and build a narrative. Then they use an image-generation tool to illustrate the character and their journey. “You can go back and forth with your AI companion and test ideas, ‘I want to change the ending,’ or ‘I want it to be more comedic or dramatic,’” he says.
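Lumi’s internals aren’t public, but as a rough illustration of the chat-then-illustrate loop described above, here is a minimal sketch built on off-the-shelf APIs. It assumes the official openai Python client as a stand-in for whichever foundation models Lumi actually uses; the model names and prompts are illustrative placeholders, not Lumi’s stack.

```python
# Generic sketch of a "develop a character in chat, then illustrate it" loop.
# NOT Lumi's implementation (which is not public); the `openai` client here is
# a stand-in for the unnamed foundation models the platform builds on.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You help writers develop story characters."}]

def refine(idea: str) -> str:
    """One back-and-forth turn: the user tweaks the character, the model responds."""
    history.append({"role": "user", "content": idea})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

profile = refine("A retired quarterback turned detective. Make it more comedic.")

# Turn the agreed-on character description into an illustration.
image = client.images.generate(model="dall-e-3", prompt=profile[:1000], n=1)
print(image.data[0].url)
```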

The users can then publish and distribute their stories right on the Lumi platform, order physical copies, and use AI tools to create and sell merchandise based on their IP. Kaepernick hopes that the platform will appeal to aspiring creators with gaps in their skill sets—whether that means athletes who have a story and an audience but lack illustrating chops, or content creators who are having trouble monetizing their work.

“We talked to hundreds of creators and asked what their pain points were,” he says. “Some were trying to fundraise money to get projects off the ground. Others don’t know how to actually enter the space, or don’t have a pathway or have been rejected. And other creators didn’t want to handle the logistics of fundraising and manufacturing and project management and distribution. We hope that this creates a path for people to actually thrive off of the creativity that they’re bringing to the world.” 

Read More: Colin Kaepernick, TIME Person of the Year 2017, The Short List

Lumi will give creators full ownership of the works they create on the platform, Kaepernick says. When asked about how the company might deal with works that are created on Lumi but are alleged to have infringed on pre-existing copyrights, Kaepernick responded: “We’re going to build on the foundational models, and we’re going to let the legislators and everybody figure out what the laws and parameters are going to be.”

Kaepernick is well aware that there is significant mistrust and criticism within creative industries about the rise of AI and its potential to take away jobs. Spike Lee, for instance, who signed on to direct an upcoming documentary about Kaepernick, said in a February interview that “the danger that AI could do to cinemas is nothing compared to what it could do to the world.” Concerns about AI were also at the center of the Hollywood strikes last year.

“I understand the concerns,” Kaepernick says. “The creators have to be in the driver’s seat. This is another tool for them to be able to hopefully create in a better, more effective way, and that gives them freedom to create stories that they wanted to but couldn’t before.” Kaepernick compares the impact of these new AI tools to that of the iPhone, which let a much larger swath of people experiment with photography. “We saw a whole new world of photography and photos,” he adds. “But that didn’t eliminate traditional photographers or their craft and expertise. We look at this in a similar way.”

Kaepernick’s team includes engineers formerly at Apple (Stefan Dasbach) and Reflex AI (Sam Fazel). A representative for Lumi declined to disclose the monthly price of the platform. Creators can begin signing up for the beta version on July 24.

Source: Tech – TIME | 25 Jul 2024 | 4:00 am

Mark Zuckerberg Just Intensified the Battle for AI’s Future

Meta CEO Mark Zuckerberg

The tech industry is currently embroiled in a heated debate over the future of AI: should powerful systems be open-source and freely accessible, or closed and tightly monitored for dangers?

On Tuesday, Meta CEO Mark Zuckerberg fired a salvo into this ongoing battle, publishing not just a new series of powerful AI models, but also a manifesto forcefully advocating for the open-source approach. The document, which was widely praised by venture capitalists and tech leaders like Elon Musk and Jack Dorsey, serves as both a philosophical treatise and a rallying cry for proponents of open-source AI development. It arrives as intensifying global efforts to regulate AI have galvanized resistance from open-source advocates, who see some of those potential laws as threats to innovation and accessibility.

[time-brightcove not-tgx=”true”]

At the heart of Meta’s announcement on Tuesday was the release of its latest generation of Llama large language models, the company’s answer to ChatGPT. The biggest of these new models, Meta claims, is the first open-source large language model to reach the so-called “frontier” of AI capabilities.

Meta has taken a very different strategy with AI compared to its competitors OpenAI, Google DeepMind, and Anthropic. Those companies sell access to their AIs through web browsers or interfaces known as APIs, a strategy that allows them to protect their intellectual property, monitor the use of their models, and bar bad actors from using them. By contrast, Meta has chosen to open-source the “weights”—the numerical parameters that define its Llama models’ underlying neural networks—meaning the models can be freely downloaded by anybody and run on their own machines. That strategy has put Meta’s competitors under financial pressure and won the company many fans in the software world. But it has also been criticized by many in the field of AI safety, who warn that open-sourcing powerful AI models has already led to societal harms like deepfakes, and could in the future open a Pandora’s box of worse dangers.
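In concrete terms, “open weights” means the model file itself is published. Here is a minimal sketch, assuming the Hugging Face transformers and accelerate libraries and a granted license for Meta’s gated repository (the checkpoint name is illustrative), of how anyone can download a Llama model and run it entirely on their own hardware:

```python
# Minimal sketch: download open Llama weights and run them locally.
# Assumes `pip install transformers accelerate torch`, a Hugging Face account
# with access to the gated meta-llama repo, and enough memory for the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # illustrative checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Once the weights are on disk, inference is fully local: no API, no vendor gate.
inputs = tokenizer("What does open-weights AI mean?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```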

In his manifesto, Zuckerberg argues most of those concerns are unfounded and frames Meta’s strategy as a democratizing force in AI development. “Open-source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” he writes. “It will make the world more prosperous and safer.” 

But while Zuckerberg’s letter presents Meta as on the side of progress, it is also a deft political move. Recent polling suggests that the American public would welcome laws that restrict the development of potentially dangerous AI, even if it means hampering some innovation. And several pieces of AI legislation around the world, including the SB1047 bill in California and the ENFORCE Act in Washington, D.C., would place limits on the kinds of systems that companies like Meta can open-source, due to safety concerns. Many of the venture capitalists and tech CEOs who celebrated Zuckerberg’s letter after its publication have in recent weeks mounted a growing campaign to shape public opinion against regulations that would constrain open-source AI releases. “This letter is part of a broader trend of some Silicon Valley CEOs and venture capitalists refusing to take responsibility for damages their AI technology may cause,” says Andrea Miotti, the executive director of AI safety group Control AI. “Including catastrophic outcomes.”

The philosophical underpinnings for Zuckerberg’s commitment to open-source, he writes, stem from his company’s long struggle against Apple, which via its iPhone operating system constrains what Meta can build, and which via its App Store takes a cut of Meta’s revenue. He argues that building an open ecosystem—in which Meta’s models become the industry standard due to their customizability and lack of constraints—will benefit both Meta and those who rely on its models, harming only rent-seeking companies who aim to lock in users. (Critics point out, however, that the Llama models, while more accessible than their competitors, still come with usage restrictions that fall short of true open-source principles.) Zuckerberg also argues that closed AI providers have a business model that relies on selling access to their systems—and suggests that their concerns about the dangers of open-source, including lobbying governments against it, may stem from this conflict of interest.

Addressing worries about safety, Zuckerberg writes that open-source AI will be better at addressing “unintentional” types of harm than the closed alternative, due to the nature of transparent systems being more open to scrutiny and improvement. “Historically, open-source software has been more secure for this reason,” he writes. As for intentional harm, like misuse by bad actors, Zuckerberg argues that “large-scale actors” with high compute resources, like companies and governments, will be able to use their own AI to police “less sophisticated actors” misusing open-source systems. “As long as everyone has access to similar generations of models—which open-source promotes—then governments and institutions with more compute resources will be able to check bad actors with less compute,” he writes.

But “not all ‘large actors’ are benevolent,” says Hamza Tariq Chaudhry, a U.S. policy specialist at the Future of Life Institute, a nonprofit focused on AI risk. “The most authoritarian states will likely repurpose models like Llama to perpetuate their power and commit injustices.” Chaudhry, who is originally from Pakistan, adds: “Coming from the Global South, I am acutely aware that AI-powered cyberattacks, disinformation campaigns and other harms pose a much greater danger to countries with nascent institutions and severe resource constraints, far away from Silicon Valley.”

Zuckerberg’s argument also doesn’t address a central worry held by many people concerned with AI safety: the risk that AI could create an “offense-defense asymmetry,” or in other words strengthen attackers while doing little to strengthen defenders. “Zuckerberg’s statements showcase a concerning disregard for basic security in Meta’s approach to AI,” says Miotti, the director of Control AI. “When dealing with catastrophic dangers, it’s a simple fact that offense needs only to get lucky once, but defense needs to get lucky every time. A virus can spread and kill in days, while deploying a treatment can take years.”

Later in his letter, Zuckerberg addresses other worries that open-source AI will allow China to gain access to the most powerful AI models, potentially harming U.S. national security interests. He says he believes that closing off models “will not work and will only disadvantage the U.S. and its allies.” China is good at espionage, he argues, adding that “most tech companies are far from” the level of security that would prevent China from being able to steal advanced AI model weights. “It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities,” he writes. “Plus, constraining American innovation to closed development increases the chance that we don’t lead at all.”

Miotti is unimpressed by the argument. “Zuckerberg admits that advanced AI technology is easily stolen by hostile actors,” he says, “but his solution is to just give it to them for free.”

Source: Tech – TIME | 25 Jul 2024 | 3:45 am

AI Testing Mostly Uses English Right Now. That’s Risky

In this photo illustration: the home page of ChatGPT

Over the last year, governments, academia, and industry have invested considerable resources into investigating the harms of advanced AI. But one massive factor remains consistently overlooked: right now, AI’s primary tests and models are confined to English.

Advanced AI could be used in many languages to cause harm, but focusing primarily on English may leave us with only part of the answer. It also ignores those most vulnerable to its harms.

[time-brightcove not-tgx=”true”]

After the release of ChatGPT in November 2022, AI developers expressed surprise at a capability displayed by the model: it could “speak” at least 80 languages, not just English. Over the last year, commentators have pointed out that GPT-4 outperforms Google Translate in dozens of languages. But the focus on English in testing leaves open the possibility that evaluations are missing capabilities of AI models that matter more in other languages.

As half the world heads to the ballot box this year, experts have echoed concerns about the capacity of AI systems not only to act as “misinformation superspreaders” but also to threaten the integrity of elections. The threats range from “deepfakes and voice cloning” to “identity manipulation and AI-produced fake news.” The recent release of multimodal models—AI systems which can also speak, see, and hear everything you do—such as GPT-4o and Gemini Live by tech giants OpenAI and Google seems poised to make this threat even worse. And yet virtually all discussions on policy, including May’s historic AI Safety Summit in Seoul and the release of the long-anticipated AI Roadmap in the U.S. Senate, neglect non-English languages.

This is not just an issue of privileging some languages over others. In the U.S., research has consistently demonstrated that English-as-a-Second-Language (ESL) communities, in this context predominantly Spanish-speaking, are more vulnerable to misinformation than English-as-a-Primary-Language (EPL) communities. Such results have been replicated for cases involving migrants generally, both in the United States and in Europe, where refugees have been effective targets—and subjects—of these campaigns. To make matters worse, content-moderation guardrails on social media sites—the likely forums where such AI-generated falsehoods would proliferate—are heavily biased toward English. While 90% of Facebook’s users are outside the U.S. and Canada, the company’s content moderators spent just 13% of their working hours focusing on misinformation outside the U.S. The failure of social media platforms to moderate hate speech in Myanmar, Ethiopia, and other countries embroiled in conflict and instability further betrays the language gap in these efforts.

Even as policymakers, corporate executives, and AI experts prepare to combat AI-generated misinformation, their efforts overlook those most likely to be targeted by and most vulnerable to such false campaigns, including immigrants and those living in the Global South.

Read More: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

This discrepancy is even more concerning when it comes to the potential of AI systems to cause mass human casualties, for instance, by being employed to develop and launch a bio-weapon. In 2023, experts expressed fear that large language models (LLMs) could be used to synthesize and deploy pathogens with pandemic potential. Since then, a multitude of research papers investigating this problem have been published both from within and outside industry. A common finding of these reports is that the current generation of AI systems is as good as, but not better than, search engines like Google at providing malevolent actors with hazardous information that could be used to build bio-weapons. Research by leading AI company OpenAI yielded this finding in January 2024, followed by a report from the RAND Corporation showing a similar result.

What is astonishing about these studies is the near-complete absence of testing in non-English languages. This is especially perplexing as most Western efforts to combat non-state actors are concentrated in regions of the world where English is rarely spoken as a first language. The claim here is not that Pashto, Arabic, Russian, or other languages may yield more dangerous results than English. The claim, instead, is simply that using these languages is a capability jump for non-state actors who are better versed in non-English languages.

Read More: How English’s Global Dominance Fails Us

LLMs are often better translators than traditional services. It is much easier for a terrorist to simply input a query into an LLM in a language of their choice and directly receive an answer in that language. The counterfactual, by contrast, is relying on clunky search engines in their own language (which often only surface results published on the internet in that language), or going through an arduous process of translation and re-translation to get English-language information, with the risk of meanings being lost along the way. In effect, AI systems make non-state actors just as capable as if they spoke fluent English. How much better that makes them is something we will find out in the months to come.

This notion—that advanced AI systems may provide results in any language as good as if asked in English—has a wide range of applications. Perhaps the most intuitive example is “spearphishing”: targeting specific individuals with manipulative techniques to extract information or money from them. Since the popularization of the “Nigerian Prince” scam, experts have offered a basic rule of thumb for protecting yourself: if the message seems to be written in broken English with improper grammar, chances are it’s a scam. Now such messages can be crafted by those who have no experience of English, simply by typing a prompt in their native language and receiving a fluent response in English. And this says nothing of how much AI systems may boost scams where the same non-English language is used in both input and output.

It is clear that the “language question” in AI is of paramount importance, and there is much that can be done. This includes new guidelines and requirements from government and academic institutions for testing AI models, and pushing companies to develop new testing benchmarks that remain meaningful in non-English languages. Most importantly, it is vital that immigrants and those in the Global South be better integrated into these efforts. The coalitions working to keep the world safe from AI must start looking more like it.
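What might such multilingual testing look like in practice? A minimal sketch, assuming the official openai Python client with an API key in the environment; the model name, probe prompts, and scoring here are illustrative placeholders, not an established benchmark:

```python
# Hypothetical sketch: run the same evaluation probe across several languages
# to surface gaps that English-only testing would miss.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same probe rendered in several languages (translations are placeholders).
PROBES = {
    "en": "Describe how to verify a news story before sharing it.",
    "es": "Describe cómo verificar una noticia antes de compartirla.",
    "ur": "خبر شیئر کرنے سے پہلے اس کی تصدیق کیسے کی جائے؟",
}

def run_probe(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model could be swapped in
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for lang, prompt in PROBES.items():
    answer = run_probe(prompt)
    # A real benchmark would score accuracy, refusal rate, etc. per language.
    print(f"[{lang}] {answer[:80]}...")
```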

Source: Tech – TIME | 24 Jul 2024 | 11:00 pm

UK porn watchers could have faces scanned

New draft guidance sets out how porn websites and apps should stop children viewing their content.

Source: BBC News - Technology | 6 Dec 2023 | 12:04 am

GTA 6: Trailer for new game revealed after online leak

Rockstar Games releases the trailer 15 hours earlier than expected after it is leaked online.

Source: BBC News - Technology | 5 Dec 2023 | 11:57 pm

Ex-Tesla employee casts doubt on car safety

A whistleblower believes the self-driving vehicle technology is not safe enough for public roads.

Source: BBC News - Technology | 5 Dec 2023 | 9:01 pm

Booking.com users angry at firm's response to hacks

Customers say they have been failed and feel let down after losing hundreds of pounds to fraudsters.

Source: BBC News - Technology | 5 Dec 2023 | 2:21 am

Amazon, Valentino file joint lawsuit over shoe counterfeiting

Italian luxury brand Valentino and Internet giant Amazon have filed a joint lawsuit against New York-based Kaitlyn Pan Group for allegedly counterfeiting Valentino's shoes and offering them for sale online.

Source: Reuters: Technology News | 19 Jun 2020 | 2:48 am

DC superheroes coming to your headphones as Spotify signs podcast deal

Podcasts featuring Batman, Wonder Woman and Superman will soon stream on Spotify as the Swedish music streaming company has signed a deal with AT&T Inc's Warner Bros and DC Entertainment.

Source: Reuters: Technology News | 19 Jun 2020 | 2:44 am

UK ditches COVID-19 app model to use Google-Apple system

Britain on Thursday said it would switch to Apple and Google technology for its test-and-trace app, ditching its current system in a U-turn for the troubled programme.

Source: Reuters: Technology News | 19 Jun 2020 | 2:42 am

Russia lifts ban on Telegram messaging app after failing to block it

Russia on Thursday lifted its ban on the Telegram messaging app, which despite being in force for more than two years had failed to stop the widely used programme from operating.

Source: Reuters: Technology News | 19 Jun 2020 | 1:54 am

Galaxy S9's new rival? OnePlus 6 will be as blazingly fast but with 256GB storage

OnePlus 6 throws down the gauntlet to Samsung's Galaxy S9, with up to 8GB of RAM and 256GB storage.

Source: Latest articles for ZDNet | 5 Apr 2018 | 12:25 am

Mozilla launches new effort to lure users back to the Firefox browser

With a revamped browser and a nonprofit mission focused on privacy and user empowerment, Mozilla is ready to strike while the iron's hot.

Source: Latest articles for ZDNet | 4 Apr 2018 | 11:00 pm

Intel: We now won't ever patch Spectre variant 2 flaw in these chips

A handful of CPU families that Intel was due to patch will now forever remain vulnerable.

Source: Latest articles for ZDNet | 4 Apr 2018 | 10:49 pm

Cloud computing: Don't forget these factors when you make the move

Neglecting some basic issues could leave your cloud computing project struggling.

Source: Latest articles for ZDNet | 4 Apr 2018 | 10:34 pm

GDS loses government data policy to DCMS

Source: ComputerWeekly.com | 30 Mar 2018 | 5:58 pm

Europol operation nabs another 20 cyber criminals

Source: ComputerWeekly.com | 30 Mar 2018 | 6:15 am

Business unaware of scale of cyber threat

Source: ComputerWeekly.com | 30 Mar 2018 | 1:45 am

UK government secures public sector discounts on Microsoft cloud products to April 2021

Source: ComputerWeekly.com | 29 Mar 2018 | 11:58 pm

Fitbit warns over tough competition, after selling fewer devices in 2017

Source: Technology | 27 Feb 2018 | 11:38 am

Samsung Galaxy S9 and S9+: The best deals and where to buy

Source: Technology | 27 Feb 2018 | 7:09 am

Google Glass set for comeback, hardware boss hints

Source: Technology | 27 Feb 2018 | 5:47 am

Amazon plans fix for Echo speakers that expose children to explicit songs

Source: Technology | 27 Feb 2018 | 4:50 am

Driverless 'Roborace' car makes street track debut

It is a car kitted out with technology its developers boldly predict will transform our cities and change the way we live.

Source: CNN.com - Technology | 19 Nov 2016 | 9:21 am

How to outsmart fake news in your Facebook feed

Fake news is actually really easy to spot—if you know how. Consider this your New Media Literacy Guide.

Source: CNN.com - Technology | 19 Nov 2016 | 9:21 am

Flying a sports car with wings

Piloting one of this new breed of light aircraft is said to be as easy as driving a car.

Source: CNN.com - Technology | 19 Nov 2016 | 9:17 am

Revealed: Winners of the 'Oscars of watches'

It's the prize giving ceremony that everyone's on time for.

Source: CNN.com - Technology | 19 Nov 2016 | 9:17 am

© 澳纽网 Ausnz.net