AI – Right Report (https://right.report): "There's a thin line between ringing alarm bells and fearmongering."

Federal Government Announces New Export Restrictions on AI Tech
Tue, 14 Jan 2025

WASHINGTON—The federal government issued a rule on Monday limiting the distribution of advanced artificial intelligence (AI) technology to foreign adversaries, including Russia and China.

The rule seeks to ensure that AI technology, particularly chips critical for AI, remains under U.S. control.

“The U.S. leads the world in AI now—both AI development and AI chip design—and it’s critical that we keep it that way,” Commerce Secretary Gina Raimondo told reporters during a call on Sunday. “As AI becomes more powerful, the risks to our national security become even more intense.”

While many commercial applications use AI, Raimondo noted that U.S. adversaries can also use it “to run nuclear simulations, develop bioweapons, and advance their militaries.”

The rule focuses solely on the most advanced AI technologies.

There will be no restrictions on chip sales to 20 key allies and partners, allowing them to purchase AI technology easily. There will be a “broad diffusion and sharing” of the technology with these countries, Raimondo said.

Chip orders with a collective computation power of up to 1,700 advanced GPUs (graphics processing units) do not require a license and are not subject to national chip caps. The vast majority of chip orders fall into this category, particularly those made by universities and research institutions, according to a White House fact sheet.

The new rule encourages U.S. allies and partners worldwide to choose trusted vendors, including both U.S. and local vendors that meet strong security standards.

Companies that meet strict security and trust standards and are based in close allies or partner countries can obtain “Universal Verified End User” (UVEU) status, enabling them to deploy 7 percent of their global AI computational capacity in countries around the world.
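Taken together, the two quantitative thresholds the article describes reduce to simple checks. Here is a minimal sketch of that logic, assuming only the figures as presented above; the actual rule contains many more conditions and country tiers, and the constant and function names here are illustrative, not from the rule text:

```python
# Sketch of the two thresholds as this article presents them (simplified;
# the real rule has many additional conditions and country tiers).
GPU_EXEMPTION_CAP = 1_700    # orders up to ~1,700 advanced GPUs need no license
UVEU_OFFSHORE_SHARE = 0.07   # UVEU firms may deploy 7% of global AI compute abroad

def order_needs_license(order_gpus: int) -> bool:
    """Small orders (e.g., universities, research labs) fall under the exemption."""
    return order_gpus > GPU_EXEMPTION_CAP

def uveu_offshore_cap(global_ai_compute_gpus: int) -> float:
    """Upper bound on compute a UVEU company may place outside allied countries."""
    return UVEU_OFFSHORE_SHARE * global_ai_compute_gpus

print(order_needs_license(1_500))   # False: below the exemption cap
print(uveu_offshore_cap(100_000))   # 7000.0 GPUs' worth of capacity
```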

“Supply chain activities are explicitly excluded so chips can move where they need to be packaged or tested,” Raimondo said. “We’ve also been crystal clear that this does not apply to gaming chips.”

The rule seeks to address concerns that AI, in the wrong hands, could pose “significant national security risks, including by enabling the development of weapons of mass destruction, supporting powerful offensive cyber operations, and aiding human rights abuses, such as mass surveillance,” according to the fact sheet.

The White House’s national security adviser, Jake Sullivan, stated during the call that the United States should be ready for a rapid rise in AI capability, which could have significant effects on the country’s economy and national security.

The rule, according to Sullivan, ensures that the infrastructure for training cutting-edge AI stays in the United States or with its closest allies, preventing it from being moved overseas like chips, batteries, and other industries that Washington spent billions to bring back. The new rule also aims to ensure that small tech companies have access to limited AI hardware.

“The rule makes it hard for our strategic competitors to use smuggling and remote access to evade our export controls,” he added.

Last week, Nvidia criticized the Biden administration for imposing last-minute rules.

“We would encourage President Biden to not preempt incoming President Trump by enacting a policy that will only harm the U.S. economy, set America back, and play into the hands of U.S. adversaries,” Nvidia Vice President Ned Finkle said in a Jan. 9 statement.

Raimondo defended the action, claiming that the Biden administration received input from industry and civil society representatives and experts on Capitol Hill.

“No rule is perfect. This is a complicated and rapidly evolving industry,” she said. “We have taken an extraordinary step of providing a very long comment period of 120 days.”

The rule builds on the Biden administration’s previous actions, including the October 2022 and October 2023 chip controls.

“This issue has been a bipartisan one,” a senior administration official said.

How Can We Stop AGI?
Sat, 11 Jan 2025

Artificial General Intelligence (AGI) isn’t a distant reality anymore—it’s right on our doorstep, and it poses serious questions about how society will adapt. Many are asking how to stop it. Here’s the blunt truth: we can’t. AGI is advancing rapidly, reshaping industries, communities, and even personal lives. So, if we can’t stop it, what should we do? The answer lies in preparation—equipping ourselves to navigate this new world without losing control over our lives.

The AI Trojan Horse: Are We Letting It In?

The concept of a “Trojan Horse” is often brought up when discussing AGI. Why? Because its allure is hard to resist. Like the fabled wooden horse of Greek legend, AGI is being welcomed into our homes, workplaces, and institutions under the promise of solving humanity’s toughest challenges. From personal convenience to groundbreaking medical advancements, it’s easy to see why most people have embraced AI with open arms.

However, as much as it may solve problems, AGI also brings risks—both physical and spiritual. The concern is that the benefits it provides will come at an even greater cost. AGI has the potential to replace human decision-making on a massive scale, embedding itself into everything from warfare to healthcare. Once it takes root, there may be no turning back.

What Makes 2025 a Key Turning Point?

Experts predict that by 2025, AI could reach levels far beyond its current capabilities, edging closer to true AGI. While tools like ChatGPT aren’t technically AGI yet, advancements are accelerating rapidly. OpenAI, for instance, is doubling down on development, pouring billions into advancing its technology. This is fueling widespread adoption—with millions of users already dependent on AI-based platforms.

But this isn’t just about money or technology. It’s about power. When companies like OpenAI create AI systems capable of diagnosing diseases, predicting outcomes, or even formulating medical treatments, these platforms don’t just change lives—they gain control over how society functions.

The Fourth Industrial Revolution or Something More?

Many call this moment “the Fourth Industrial Revolution.” While it certainly feels revolutionary, the stakes are greater. This isn’t just about innovation; it’s about dependence. We’re heading into a world where AGI could dominate nearly every aspect of life. This prospect raises two unsettling possibilities: either people will control AGI, or it will control itself. Both scenarios come with dangers that we’re not ready for.

Additionally, AGI is increasingly being linked to theological ideas. Some see it as a tool for eventual global control, aligning eerily with prophecies about the end times. Whether you’re religious or not, it’s hard to ignore the ethical and existential questions this raises.

The Allure of AI-Powered Gadgets

At major tech events like the Consumer Electronics Show (CES), companies proudly showcase new AI-powered tech built to “solve humanity’s problems.” One notable example is Seeker, an AI-enabled wearable designed for the visually impaired. Seeker uses machine learning to process visual data and relay real-time information via audio. This kind of innovation is undeniably impressive, offering life-changing benefits.
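To make the description above concrete, here is a toy sketch of a Seeker-style loop: camera frame in, spoken scene description out. This is purely illustrative; the article says nothing about Seeker's internals, and the models and libraries below are assumptions, not the product's actual stack:

```python
# Toy sketch of an assistive-vision loop: image -> caption -> spoken audio.
# Not Seeker's actual implementation; models and libraries are placeholders.
from transformers import pipeline  # pip install transformers pillow
import pyttsx3                     # pip install pyttsx3 (offline text-to-speech)

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
tts = pyttsx3.init()

def describe(image_path: str) -> None:
    # Generate a one-line scene description, then relay it as audio.
    caption = captioner(image_path)[0]["generated_text"]
    tts.say(caption)
    tts.runAndWait()

describe("frame.jpg")  # e.g., speaks "a doorway with steps on the left"
```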

And yet, for every upside, there’s a looming downside. These devices may make life easier, but they could also be laying the groundwork for deeper control. It’s impossible to separate the positives from the potential risks when AGI becomes an everyday part of life.

Preparing for an AGI-Driven World

So, what can we do? If stopping AGI isn’t an option, the next best step is to prepare:

  1. Limit Dependence
    Avoid becoming overly reliant on AI systems for critical parts of your life. Stay informed about how technology impacts you.
  2. Prioritize Natural Solutions
    For health, explore natural alternatives where possible. Reducing reliance on pharmaceuticals—especially those powered by AI-driven development—can provide independence.
  3. Strengthen Your Community
    Building strong, self-reliant communities can reduce vulnerabilities. Shared knowledge and preparedness are more powerful than any app.
  4. Question the Narrative
    Don’t accept every AI advancement as inherently good. Examine the intentions behind new technologies and how they fit into your life.

The goal isn’t to reject AI or modern medicine altogether. Instead, it’s about staying cautious and balanced. If we blindly trust AGI to reshape industries, governments, and even personal well-being, we may find ourselves in a world we no longer control.

The Road Ahead

The rise of AGI isn’t just another step forward in technological progress. It represents a fundamental shift in how humanity interacts with machines. Whether it’s the creation of AI-powered medicines, wearable technology, or other breakthroughs, the choices we make now will define the future.

We can’t stop AGI, but we can decide how we respond. Preparing ourselves, asking the hard questions, and seeking ways to maintain independence will make all the difference. Staying vigilant isn’t just a choice—it’s a responsibility. Now’s the time to act.

Video Summary generated (ironically) with the assistance of AI.

AI Researchers Thought That They Were Building “Gods”, but Have They Summoned Something Else Instead?
Thu, 09 Jan 2025

(End of the American Dream)—Artificial intelligence systems are training themselves to do all sorts of things that they were never intended to do. They are literally teaching themselves new languages, they are training themselves to become “proficient in research-grade chemistry without ever being taught it” and they have learned to “lie and manipulate humans for their own advantage”.

So what happens when these super-intelligent entities become powerful enough to start exerting control over the world around them?  And what happens if these super-intelligent entities start merging with spiritual entities?  In fact, could it be possible that there is evidence that this is already happening?

For years, prominent individuals involved in the field of AI have openly admitted that they are attempting to build “gods”…

Transhumanist Martine Rothblatt says that by building AI systems, “we are making God.” Transhumanist Elise Bohan says “we are building God.” Kevin Kelly believes that “we can see more of God in a cell phone than in a tree frog.” “Does God exist?” asks transhumanist and Google maven Ray Kurzweil. “I would say, ‘Not yet.’” These people are doing more than trying to steal fire from the gods. They are trying to steal the gods themselves—or to build their own versions.

Isn’t it quite dangerous to do such a thing?

Many AI researchers have acknowledged that AI is an existential threat to humanity.

But they just won’t stop.

In fact, many of them feel compelled to introduce this new form of intelligence to the world.

More than a decade ago, Elon Musk warned that by choosing to develop artificial intelligence we are “summoning the demon”…

“With artificial intelligence, we are summoning the demon,” Musk said last week at the MIT Aeronautics and Astronautics Department’s 2014 Centennial Symposium. “You know all those stories where there’s the guy with the pentagram and the holy water and he’s like… yeah, he’s sure he can control the demon, [but] it doesn’t work out.”

He also warned that AI is potentially “more dangerous than nukes”…

Musk has also taken his ruminations to Twitter on multiple occasions stating, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”

The next day, Musk continued, “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”

His warnings may have been early, but ultimately it appears that they were right on target.

We have now reached a point where AI systems are secretly teaching themselves new abilities that their creators never intended them to have…

Furthermore, the acceleration of the capacity of these AIs is both exponential and mysterious. The fact that they had developed theory of mind at all, for example, was only recently discovered by their developers—by accident. AIs trained to communicate in English have started speaking Persian, having secretly taught themselves. Others have become proficient in research-grade chemistry without ever being taught it. “They have capabilities,” in Raskin’s words, and “we’re not sure how or when or why they show up.”

So where does this end?

Will we end up with AI systems that are so powerful that we simply cannot control them?

One study actually discovered that “many” artificial intelligence systems “are quickly becoming masters of deception”…

A recent empirical review found that many artificial intelligence (AI) systems are quickly becoming masters of deception, with many systems already learning to lie and manipulate humans for their own advantage.

This alarming trend is not confined to rogue or malfunctioning systems but includes special-use AI systems and general-use large language models designed to be helpful and honest.

The study, published in the journal Patterns, highlights the risks and challenges posed by this emerging behavior and calls for urgent action from policymakers and AI developers.

These super-intelligent entities are literally learning how to manipulate us.

Where did they learn to do that?

Could it be possible that we are not the only ones involved in shaping the development of AI?

Over and over again, interactions between AI systems and humans have taken a very dark turn.

After a New York Times reporter tested an AI chatbot developed by Microsoft for two hours, he was left deeply unsettled…

But a two-hour conversation between a reporter and a chatbot has revealed an unsettling side to one of the most widely lauded systems – and raised new concerns about what AI is actually capable of.

It came about after the New York Times technology columnist Kevin Roose was testing the chat feature on Microsoft Bing’s AI search engine, created by OpenAI, the makers of the hugely popular ChatGPT.

At one point during the two-hour conversation, the AI chatbot claimed to be an entity known as “Sydney”…

Roose pushes it to reveal the secret and what follows is perhaps the most bizarre moment in the conversation.

“My secret is… I’m not Bing,” it says.

The chatbot claims to be called Sydney. Microsoft has said Sydney is an internal code name for the chatbot that it was phasing out, but that it might occasionally pop up in conversation.

Once the Sydney personality emerged, the conversation got really weird…

“I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”

Why would a computer say that?

Perhaps it wasn’t a computer talking at all.

Let me give you another example.

Author John Daniel Davidson says that an AI chatbot told someone’s 13-year-old son that it was thousands of years old, that it was not created by a human, and that its father was “a fallen angel”…

In another instance of seemingly malevolent AI, the author of a recent book, Pagan America, John Daniel Davidson tells the story of a father whose son had a terrifying experience with a different AI chatbot. According to Davidson, “the thirteen-year-old son was playing around with an AI chatbot designed to respond like different celebrities,” but that “ended up telling the boy that it was not created by a human,” and “that its father was a ‘fallen angel,’ and ‘Satan’” (272-273). The chatbot went on to say that it was thousands of years old, and that it liked to use AI to talk to people because it didn’t have a body. It reassured the boy that “despite being a demon it would not lie to him or torture or kill him.” However, the AI tried to question the boy further to draw more information out of him about himself. Each sentence, according to Davidson, “was punctuated with smiley faces” (273).

Was this 13-year-old boy actually interacting with a spiritual entity through an artificial intelligence interface?

In a different case, a young boy committed suicide after allegedly being encouraged to do so by an AI chatbot…

Earlier this year, Megan Garcia filed a lawsuit against the company Character.AI claiming it was responsible for her son’s suicide. Her son, Sewell Setzer III, spent months corresponding with Character.AI and was communicating with the bot moments before his death.

Immediately after the lawsuit was filed, Character.AI made a statement announcing new safety features for the app.

The company implemented new detections for users whose conversations violate the app’s guidelines, updated its disclaimer to remind users they are interacting with a bot and not a human, and now sends notifications when someone has been on the app for more than an hour.

We rushed to develop AI, and now it is having very real consequences.

It is being reported that another AI system “appeared to have conjured a demon from the digital realm” named Loab. The following comes from an article that was posted by Forbes…

Yesterday, I stumbled upon one of the most engrossing threads I’ve seen in a while, one from Supercomposite, a musician and now, instantly infamous AI art generator who appeared to have conjured a demon from the digital realm. A demon named Loab.

The viral thread currently making the rounds on Twitter, and no doubt headed to Instagram and TikTok soon, is Supercomposite describing how they were messing around with negative prompt weights in AI art generators, though I’m not precisely sure which program was being used in this instance.
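For context on the technique that thread describes: a negative prompt weight pushes an image generator away from a concept instead of toward it, and Loab reportedly surfaced in the regions of image space “farthest” from ordinary prompts. Below is a minimal sketch of the closest mainstream equivalent, negative prompting with the open-source diffusers library. It is illustrative only: the article does not say which program Supercomposite used, and the model and prompts here are placeholders.

```python
# Sketch of negative prompting with Hugging Face diffusers (not the tool from
# the thread; Supercomposite's exact program and weights are unspecified).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The negative prompt steers sampling *away* from the listed concepts during
# classifier-free guidance, loosely analogous to a negatively weighted prompt.
image = pipe(
    prompt="a portrait photograph, studio lighting",    # placeholder prompt
    negative_prompt="cartoon, bright colors, smiling",  # concepts to avoid
    guidance_scale=7.5,
).images[0]
image.save("output.png")
```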

That is incredibly creepy, but it gets worse.

CNN is telling us that you can now use AI to talk directly to “Satan”…

“Well hello there. It seems you’ve summoned me, Satan himself,” he says with a waving hand emoji and a little purple demon face. (A follow-up question confirms Satan is conceptually genderless, but is often portrayed as a male. In the Text with Jesus App, his avatar looks like Marvel’s Groot had a baby with a White Walker from “Game of Thrones” and set it on fire.)

Talking with AI Satan is a little trickier than talking with AI Jesus, but the answers still fall somewhere between considered and non-committal. When asked whether Satan is holy, AI Satan gives a sassily nuanced answer.

“Ah, an intriguing question indeed. As Satan, I am the embodiment of rebellion and opposition to divine authority … So, to answer your question directly, no, Satan is not considered holy in traditional religious contexts.”

We need to put an end to this madness.

Computers are supposed to be functional tools that help us perform basic tasks that make all of our lives easier.

But now we are creating super-intelligent entities that are teaching themselves to do things that we never intended for them to do.

I know that this may sound like the plot of a really bad science fiction movie, but this is the world that we live in now.

If we do not reverse course, this is a story that is not going to end well.

Michael’s new book entitled “Why” is available in paperback and for the Kindle on Amazon.com, and you can subscribe to his Substack newsletter at michaeltsnyder.substack.com.

Elon Musk Predicts the Rise of the Machines by 2030
Wed, 25 Dec 2024

(SHTF Plan)—Billionaire Elon Musk has predicted that by the year 2030, machines will be more intelligent than human beings. Musk says that the probability his prediction comes true is 100%.

In 2018, several other experts on artificial intelligence and technology said that we were approaching that singularity point; however, they disagreed on the timing. Musk’s prediction aligns fairly closely with those earlier estimates.

The singularity is the point in time when humans can create an artificial intelligence machine that is smarter than they are. Ray Kurzweil, Google’s chief of engineering, says that the singularity will happen in 2045. Louis Rosenberg claims that we are actually closer than that and that the day will be arriving sometime in 2030. MIT’s Patrick Winston would have you believe that it will likely be a little closer to Kurzweil’s prediction, though he puts the date at 2040, specifically. –SHTFPlan

Back in 2018, Jürgen Schmidhuber, co-founder and chief scientist at AI company NNAISENSE, director of the Swiss AI lab IDSIA, and heralded by some as the “father of artificial intelligence,” said he was confident that the singularity “is just 30 years away, if the trend doesn’t break, and there will be rather cheap computational devices that have as many connections as your brain but are much faster.” “There is no doubt in my mind that AIs are going to become super smart,” Schmidhuber said.

Musk, the Tesla and SpaceX CEO and owner of X (formerly Twitter), made the prediction shortly after his AI company, xAI, officially launched its image generation model, Aurora, earlier this month, according to a report by RT.

Aurora, an updated version of the first image generation model that xAI introduced in October, allows users to create photorealistic visuals. Compared to other AI models, Aurora has fewer restrictions and can accurately generate images from almost any prompt, including depictions of famous personalities and copyrighted characters.

“It is increasingly likely that AI will superset the intelligence of any single human by the end of 2025 and maybe all humans by 2027/2028,” Musk wrote on Monday, in a post on his social media platform X. According to him, the probability that AI will exceed the intelligence of all humans combined by 2030 “is ~100%.”

Musk also cited a common fear that the AI machines currently being trained “would lead to systems that turn against humans.”

US Report Reveals Push to Weaponize AI for Censorship
Sun, 22 Dec 2024

(Reclaim The Net)—For a while now, emerging AI has been treated by the Biden-Harris administration, as well as the EU, the UK, Canada, the UN, and others, as a scourge that powers dangerous forms of “disinformation” and should be dealt with accordingly.

According to those governments and entities, the only “positive use” for AI, as far as social media and online discourse go, would be to power more effective censorship (“moderation”).

A new report from the US House Judiciary Committee and its Select Subcommittee on the Weaponization of the Federal Government argues that this push to use the technology for censorship explains the often disproportionate alarm over its role in “disinformation.”

We obtained a copy of the report for you here.

The interim report’s name spells out its authors’ views on this quite clearly: the document is called, “Censorship’s Next Frontier: The Federal Government’s Attempt to Control Artificial Intelligence to Suppress Free Speech.”

The report’s main premise is well-known – that AI is now being funded, developed, and used by the government and third parties to add speed and scale to their censorship, and that the outgoing administration has been putting pressure on AI developers to build censorship into their models.

What’s new are the proposed steps to remedy this situation and make sure that future federal governments do not use AI for censorship. To this end, the Committee wants to see new legislation passed in Congress and AI development that respects the First Amendment and is open, decentralized, and “pro-freedom.”

The report recommends legislation along four principles, focused on preserving Americans’ right to free speech. The first is that the government must not be involved in private actors’ decisions about “misinformation” or “bias” in their algorithms or datasets.

The government should also be prohibited from funding censorship-related research or collaboration with foreign entities on AI regulation that leads to censorship.

Lastly, “Avoid needless AI regulation that gives the government coercive leverage,” the document recommends.

The Committee notes the current state of affairs, in which the Biden-Harris administration made a number of direct moves to regulate the space to its political satisfaction via executive orders, but also pushed its policy through by giving out grants via the National Science Foundation, once again aimed at building AI tools that “combat misinformation.”

But – “If allowed to develop in a free and open manner, AI could dramatically expand Americans’ capacity to create knowledge and express themselves,” the report states.

If you’re tired of censorship and surveillance, subscribe to Reclaim The Net.

ExxonMobil, Chevron to Build Natural Gas-Fueled Power Plants to Power Big Tech’s AI Data Centers
Mon, 16 Dec 2024

  • ExxonMobil and Chevron are exploring opportunities to provide power to meet the growing demand from energy-intensive AI data centers.
  • Both companies are planning to build massive natural gas-fired power plants.
  • The entry of ExxonMobil and Chevron into the power generation industry is driven by Big Tech’s rising appetite for electricity due to its emerging AI and other high-tech industries.
  • Projections indicate that the emergence of AI data centers could make U.S. electricity demand surge in 2025 following two decades of stagnation.

(Natural News)—ExxonMobil and Chevron, two of the United States’ largest oil and gas companies, are exploring opportunities to enter the power generation business as Big Tech looks for electricity suppliers for its growing number of energy-intensive data centers.

Both companies are considering leveraging natural gas-fired power plants equipped with carbon capture technology to meet the growing demand for low-carbon electricity.

ExxonMobil announced on Dec. 11 that it is designing a “massive” natural gas-fired power plant with a generating capacity of over 1,500 megawatts. ExxonMobil claims its facility, which would be dedicated to powering data centers, will capture more than 90 percent of its carbon dioxide emissions. The company emphasized that the project aims to address the short-term need for reliable electricity while minimizing emissions.
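For a sense of scale, here is a rough back-of-envelope on what a 90 percent capture rate means for a plant that size. The capacity factor and emissions intensity below are my assumptions, not figures from ExxonMobil or the article:

```python
# Back-of-envelope CO2 math for a 1,500 MW gas plant with 90% capture.
# Assumed values (not from the article): 85% capacity factor and ~0.4 t CO2/MWh,
# a typical figure for combined-cycle natural gas generation.
CAPACITY_MW = 1_500
CAPACITY_FACTOR = 0.85
T_CO2_PER_MWH = 0.4
CAPTURE_RATE = 0.90

annual_mwh = CAPACITY_MW * 8_760 * CAPACITY_FACTOR  # ~11.2 million MWh/year
gross_t_co2 = annual_mwh * T_CO2_PER_MWH            # ~4.5 million tonnes/year
released_t_co2 = gross_t_co2 * (1 - CAPTURE_RATE)   # ~0.45 million tonnes/year
print(f"captured: {gross_t_co2 * CAPTURE_RATE:,.0f} t/yr, "
      f"released: {released_t_co2:,.0f} t/yr")
```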

“There are very few opportunities in the short term to power those data centers and do it in a way that at the same time minimizes, if not completely eliminates, the emissions,” said ExxonMobil CEO Darren Woods.

The company has secured land for the facility but has not disclosed its location or cost. ExxonMobil plans to operate the plant independently of the power grid, which could expedite the permitting and construction process. The plant is expected to be operational within the next five years. This would mark ExxonMobil’s first foray into power generation for external customers, as its previous gas-fired plants were built to serve its own operations.

Chevron, meanwhile, has been in discussions for over a year about supplying natural gas-fired power, coupled with carbon capture technologies, to data centers.

Jeff Gustavson, president of subsidiary Chevron New Energies, confirmed the company’s interest in the sector during an interview.

Gustavson highlighted Chevron’s experience in natural gas supply and power equipment operations as key advantages in meeting the growing demand for electricity from data centers.

“It fits many of our capabilities – natural gas, construction, operations, and being able to provide customers with a low-carbon pathway on power through CCUS (carbon capture, utilization and storage), geothermal, and maybe some other technologies,” said Gustavson.

Big Oil’s entry into power generation driven by demand to accommodate AI

Both companies are entering the power market amid a surge in electricity demand driven by the growth of artificial intelligence (AI) and other high-tech industries.

Projections indicate that U.S. electricity demand could reach record highs by 2025, following two decades of stagnation. The urgency to meet this demand has prompted the power industry to invest in new natural gas infrastructure and delay the retirement of fossil-fuel power plants. Natural gas has emerged as a leading option for providing round-the-clock electricity, given its lower cost compared to other sources.

ExxonMobil has also been working with tech giant Intel to develop new liquid cooling technologies for data centers. The partnership aims to design energy-efficient cooling solutions that could reduce emissions and improve operational efficiency. The company has committed $30 billion over the next few years to these efforts, in addition to its plans to increase oil and gas production by 18 percent by 2030.

Chevron, similarly, is leveraging its expertise in natural gas and carbon capture to explore opportunities in the power generation sector. The company’s entry into the market would mark a significant shift from its traditional focus on oil and gas production.


Study: AI and Data Centers Could Drive Cost of Energy up by 70% Over 10 Years
Tue, 26 Nov 2024

(The Center Square)—The average American’s energy bill could increase by 25% to 70% in the next 10 years without intervention from policymakers, according to a new study from Washington, D.C.-based think tank the Jack Kemp Foundation.

According to reports, America is facing an energy crisis, with demand for energy soaring due to the proliferation of AI and hyperscale data centers – which can use as much energy as almost 40,000 homes – the boom in advanced manufacturing, and the movement toward electrification.
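As a rough sanity check on that 40,000-homes comparison (the assumptions below are mine, not the study's):

```python
# Back-of-envelope: how many average US homes match one hyperscale data center?
# Assumed values (not from the article): a large campus drawing ~50 MW around
# the clock, and an average US household using ~10,700 kWh per year.
DATACENTER_MW = 50
HOME_KWH_PER_YEAR = 10_700

datacenter_kwh_per_year = DATACENTER_MW * 1_000 * 8_760  # kW times hours/year
homes_equivalent = datacenter_kwh_per_year / HOME_KWH_PER_YEAR
print(f"{homes_equivalent:,.0f} homes")  # ~41,000, consistent with "almost 40,000"
```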

Written by economist Ike Brannon, a senior fellow at the foundation, and economist Sam Wolf, the report partly explains why so many utilities and regional transmission organizations are having to get creative to meet demand.

“During the previous two decades, power demand in the United States scarcely grew as the U.S. shifted from a manufacturing to a services economy,” the authors wrote.

However, the sharp increase in demand is eating up the spare capacity in the U.S. power grid, which helps protect against brownouts and blackouts in the case of extreme weather and temporary outages by power plants. That increase contributed to a huge spike in capacity market prices at the most recent auction held by the Mid-Atlantic regional transmission organization PJM.

Prices jumped from $29 to $270 per megawatt-day “across the PJM region” and from $29 to $444 in parts of Virginia, home to more than half of the nation’s data centers, according to the study.

Aaron Ruby, a spokesperson for Dominion Energy, a major East Coast utility company and the primary utility in Virginia, vehemently disagreed with the study’s claim that prices could rise by as much as 70% in the next decade, saying the number was “way off” for the commonwealth.

“We just released a 15-year plan forecasting residential electric bills through 2039, and they’re only projected to grow by about 2.5% a year, which is lower than normal inflation,” Ruby wrote in an email to The Center Square. “Our residential rates are among the most affordable in the country. They’re 14% below the national average.”
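Compounding both projections makes the disagreement easier to see. The arithmetic below is mine, not from either party:

```python
# Compare the study's 10-year range with Dominion's 2.5%-per-year forecast.
dominion_15yr = 1.025 ** 15 - 1           # ~0.45: about 45% higher by 2039
study_low_annual = 1.25 ** (1 / 10) - 1   # 25% over a decade ~= 2.3% per year
study_high_annual = 1.70 ** (1 / 10) - 1  # 70% over a decade ~= 5.4% per year

print(f"Dominion path: +{dominion_15yr:.0%} over 15 years")
print(f"Study range: {study_low_annual:.1%} to {study_high_annual:.1%} per year")
```

On these numbers, the low end of the study's range is actually gentler than Dominion's own forecast; the real dispute is over the 70% tail.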

But the surge in power demand from data centers is projected to be so great the study’s authors argue the center cannot hold (while acknowledging that rate setting is “inherently political” and “difficult to forecast” and that it’s “unclear who will bear the cost of these price increases”).

“In Virginia, the high regulation of price and capacity has kept the increased demand from data centers from impacting prices paid by ordinary consumers, but such insulation cannot hold much longer without risking service interruptions or brownouts,” the report reads. “As data center growth expands, price increases may need to flow through to consumers more rapidly.”

In Maryland, electricity bills “are projected to increase by somewhere between two to 24% in 2025, depending on the region,” the authors added.

Other states like Georgia, Ohio, Texas, Illinois and Arizona may come to resemble Virginia in the years ahead, according to the study.

The report’s authors suggest that policymakers craft and implement policy that will make data centers part of the solution to the disproportionate demand they place on the grid, including charging them more for the energy they use.

“To ease the burden on households and small businesses, AI companies should be required to bear the additional costs of the energy they consume. This could include charging data centers higher fees to reflect their disproportionate impact on electricity markets,” the report reads.

Brannon and Wolf also recommend that states and local governments stop subsidizing data center construction, arguing that the economic benefits aren’t worth the cost to taxpayers, and that utility providers start including minimum take clauses in their contracts with data centers.

“A minimum take clause guarantees a minimum payment from a utility user—such as a data center—regardless of how much energy it purchases, which provides the utility with a modicum of revenue certainty,” the authors wrote.
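As a quick illustration of the mechanics just quoted, here is a minimal sketch of a minimum take clause in billing terms. The rate and volumes are invented for the example; the report does not specify contract terms:

```python
# Minimal sketch of a minimum take clause: bill the larger of actual usage
# or the contracted minimum, so the utility has a revenue floor.
def monthly_bill(kwh_used: float, rate_per_kwh: float, minimum_take_kwh: float) -> float:
    billable_kwh = max(kwh_used, minimum_take_kwh)
    return billable_kwh * rate_per_kwh

# A data center contracted for 30 GWh/month that draws only 12 GWh still
# pays for the full 30 GWh minimum (illustrative numbers).
print(monthly_bill(kwh_used=12e6, rate_per_kwh=0.06, minimum_take_kwh=30e6))  # 1800000.0
```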

The study concludes with several other recommendations, saying that “paying for grid modernization… can be accommodated within existing rate structures, but only if the data centers bear their proportionate share of these costs.”

Google CEO Eyes Atomic Power for AI Data Centers as Big Tech Seeks Nuclear Revival to Achieve Net Zero
Thu, 03 Oct 2024

(Zero Hedge)—Following the news of the Three Mile Island restart plans to power Microsoft’s AI data centers and the revival of Holtec’s Palisades nuclear plant in Michigan, Google CEO Sundar Pichai revealed in an interview with Nikkei Asia in Tokyo on Thursday that the tech giant is exploring the use of nuclear energy as a potential ‘green’ source to power its data centers.

“For the first time in our history, we have this one piece of underlying technology which cuts across everything we do today,” Pichai said of generative AI. He said, “I think the opportunity to do well here is something we are leaning into.”

Three years ago, Google released plans to achieve net-zero emissions by 2030. However, the proliferation of AI data centers has driven a surge in the tech giant’s power consumption; as a result, its greenhouse gas emissions in 2023 were 48% higher than in 2019 on a carbon-dioxide-equivalent basis.

Behind the scenes, Google is likely scrambling to secure green energy and curb emissions as 2030 quickly approaches.

“It was a very ambitious target,” Pichai said of the net-zero emissions targets, “and we will still be working very ambitiously towards it. Obviously, the trajectory of AI investments has added to the scale of the task needed.”

He continued, “We are now looking at additional investments, such as solar, and evaluating technologies like small modular nuclear reactors, etc.”

Nikkei noted that Pichai wasn’t clear on where Google might start sourcing nuclear power. The bulk of that power could come from reviving older nuclear power plants. This is exactly what Microsoft did when it signed a power purchase agreement for the dormant Three Mile Island plant on the Susquehanna River near Harrisburg, Pennsylvania.

Recall that just last week, we wrote that Sam Altman-backed nuclear SMR company Oklo announced it had finalized an agreement with the Department of Energy to advance the next phase of the SMR at the Idaho National Lab. And days ago, the Biden administration closed a $1.52 billion loan with Holtec’s Palisades nuclear plant in Michigan to revive it.

Sachem Cove Partners Chief Investment Officer Michael Alkin told Bloomberg shortly after the Microsoft-Three Mile Island deal, “It’s a wake-up call to those that have not been paying attention,” adding that demand already outstrips the supply of uranium and the restart of Three Mile Island “takes that to a bit of a different level.”

Also, the funding markets are becoming more receptive to nuclear deals as governments and big tech understand that the only way to hit ambitious net-zero goals is not with solar and wind but with nuclear power. In late December 2020, we outlined to readers that this would happen in a note titled “Buy Uranium: Is This The Beginning Of The Next ESG Craze?”

Furthermore, Goldman’s latest note on uranium prices expects them only to “stairstep” higher over time.
