AI - it's all about relationships with earth, self, others - and the enemy
Artificial Intelligence – the very name should give us pause. Beyond the name, AI raises issues ranging from intellectual property to job losses to our own intellect and emotions being removed from the equation. And there are also big ecological and relational impacts. Here is a summary of some of these implications, drawn from an MIT study and other sources:
- By 2028, AI could require as much energy as 22% of US households use.
- Over half of this energy comes from coal and gas, since AI cannot afford interruptions
in its power supply - as in Ohio, at right. And fossil fuels mean climate-changing gases.
- OpenAI’s Stargate data center in Texas will emit 3.7 million tons of CO2 per year – as much
as Iceland. A center in CA, with its strong environmental rules, will emit much less pollution
than one in West Virginia, where coal is king.
- There is also a huge draw on fresh water to cool the data centers’ equipment – some
facilities use millions of gallons per day.
- Of course we are part of the problem. The study revealed that making a five-second AI
video at 16 frames per second takes as much energy as running a microwave for an hour.
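To make that microwave comparison concrete, here is a rough back-of-envelope calculation in Python. The microwave wattage (about 1,100 W) is an illustrative assumption, not a figure from the study; only the five seconds, 16 frames per second, and one hour come from the bullet above.

```python
# Back-of-envelope: energy of a 5-second, 16 fps AI video vs. one hour of microwave use.
# Assumption (not from the MIT study): a typical microwave draws about 1,100 watts.
MICROWAVE_WATTS = 1_100
HOURS = 1

microwave_wh = MICROWAVE_WATTS * HOURS   # watt-hours used in one hour
frames = 5 * 16                          # 5 seconds at 16 frames/second = 80 frames
wh_per_frame = microwave_wh / frames     # implied energy cost of each generated frame

print(f"Microwave, 1 hour: {microwave_wh} Wh")
print(f"Frames generated:  {frames}")
print(f"Implied cost:      {wh_per_frame:.2f} Wh per frame")
```

Under that assumption, each of the 80 frames carries the energy cost of nearly 14 watt-hours – roughly a minute of microwave use per frame.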
- Along with the eco-effects come emotional and relational impacts: three-quarters of US teenagers have turned to AI for friendship at some point, and within a decade over half of us may be going there for advice, care, and companionship. Meanwhile, AI algorithms magnify existing prejudices, further dividing our already divided society.
- What about jobs - or the lack thereof? A McKinsey report projects that by 2030, 30% of current U.S. jobs could be automated, with 60% significantly altered by AI tools. Goldman Sachs predicts that up to 50% of jobs could be fully automated by 2045, driven by generative AI and robotics.
- AI in warfare raises critical ethical dilemmas, primarily regarding accountability, the loss of human judgment in "kill decisions," and the risk of algorithmic bias violating international humanitarian law. While AI could theoretically increase precision, it risks accelerating conflict, devaluing human life, and creating accountability gaps where no human is responsible for
lethal errors. Read Dave Grossman's On Killing: The Psychological Cost of Learning to
Kill in War and Society for good reasons to keep the "human element" in warfare: it
makes us face - and perhaps feel remorse for - killing the enemy.
- And AI is just plain intrusive: “Conversational AI models are being practically shoved down
our throats and it's getting harder and harder to opt out,” notes Dr. Sasha Luccioni, an AI
researcher.
- This TED Radio Hour presentation features Tom Gruber, who not only co-created Siri, but
is now leading efforts to make serving humans AI's guiding principle, rather than the other way around. He concludes with a comparison to the role of the matriarch in an elephant family - listen and learn what he means.
What to do?
- Don’t use AI for simple searches and tasks – other search engines can do the same
thing with far less energy. If you do use an AI program, a smaller language model such
as SmolLM is better – up to 150 times less energy per search!
- If a data center is being built in your area, educate yourself and your community
about the impacts – then join or organize protests and petition campaigns.
- Seek the real: real conversations and real experiences, especially across the lines
that divide us by age, gender, geography, culture, vocation, and race; and real encounters
with nature, shared together – especially with children and youth, since these are an antidote
to screen dependence, and nothing engages like nature.
- Summary: connecting to the realness of nature, others, and one’s own wisdom (thinking for yourself) not only sidelines AI but leads to a healthier and longer life – by as much as 7.5 years!


"CANCEL OPENAI. DELETE CHATGPT.
Despite what OPENAI wants you to believe, they are perfectly comfortable with mass surveillance of all Americans.
DOWNLOAD CLAUDE. Anthropic sought assurances from the Pentagon that its technology would not be used for mass surveillance nor for autonomous weapons systems. When the Pentagon refused - OpenAI happily stepped in AND AGAIN put profits over individual freedoms (their recurring theme).
Anthropic has continued to build CLAUDE (which is a more enjoyable user experience) with ethics, safety and individual sovereignty at the forefront."
- from NCP friend Meaghan

