Day: November 2, 2023

Climate Crisis Is Generating Global Health Crisis, UN Agency Says

Climate change threatens to reverse decades of progress toward better health and well-being, particularly in the most vulnerable communities, according to a new report by the U.N. weather agency.

In its annual State of Climate Services report, the World Meteorological Organization on Thursday warned that the climate crisis was generating a global health crisis and said that many ill effects of climate change could be tempered by adaptation and prevention measures.

WMO said climate change was causing the world to warm at a faster rate than at any other point in recorded history.

“There is no more return back to the good old milder climate of the last century. Actually, we are heading towards a warmer climate for the coming decades, anyhow,” said Petteri Taalas, WMO secretary-general.

“Unless we are successful in phasing out this negative trend” by limiting global warming to 1.5 or 2 degrees Celsius, “we will see this situation getting worse,” he said.

The report finds countries in Africa and southern Asia are most at risk from climate change, which it says is fueling vector-borne diseases such as dengue and malaria, even in places where they were not seen before.

“And we are creating conditions for more noncommunicable diseases like lung cancer and chronic respiratory infections also, because of the bad quality of the air that we breathe,” said Maria Neira, director of the Department of Environment, Climate Change and Health at the World Health Organization. “The extreme weather events obviously will have dramatic consequences for the health of the people.”

Taalas noted that food insecurity also was growing and that increasingly frequent heat waves were worsening the impacts of extreme weather events.

“For example, in the Horn of Africa, during the past three years, we have had very severe food insecurity situations, which was related to both heat and drought,” he said. “And quite often in these episodes when we have heat waves, we have also very poor air quality.”

WMO said extreme heat causes more deaths than any other extreme weather event. It estimated that excessive heat killed approximately 489,000 people a year from 2000 to 2019, with 45% of these deaths in Asia and 36% in Europe.

It noted that heat waves also worsen air pollution, “which is already responsible for an estimated 7 million premature deaths every year and is the fourth-biggest killer by health risk factor.”

“There is a significant challenge by the health community to address climate change,” said Joy Shumake-Guillemot, who leads the WHO/WMO Joint Office on Climate and Health.

“We see major gaps, particularly in early-warning systems for climate-related impacts, such as extreme heat,” she said, noting that only half of countries are now getting the message “about how dangerous heat conditions might be affecting them.”

She said the report focused on the power and opportunity of using climate science and services to better inform national policies.

However, while 74% of national meteorological services are providing data to the health systems in countries around the world, “only about 23% of ministries of health are really using this information in systematic ways in health surveillance systems to track the diseases that we know are influenced by climate,” she said, adding that climate services had to be further developed to address these gaps.

WMO chief Taalas agreed with this assessment, noting that climate information and services can play an important role in helping states manage extreme weather events, predict health risks and save lives.

For example, he said early-warning systems for extreme heat and for pollen to help allergy sufferers were very important. Unfortunately, he said, well-functioning early-warning services in African countries and other states were very limited.

“One of the sectors where African countries do not have services are these health services, and many African countries are not able to provide heat warnings for their populations, and their authorities have limitations in coping with such warnings,” he said.

To address this gap, Taalas said, WMO has established a major early-warning services program to help countries in Africa and elsewhere improve their management of environmental health and climate services.

“From our perspective,” he said, “it is very smart to prevent pandemics, and we can do so by improving the early warning services.

“This would prevent … the human casualties and we could minimize the economic losses by having proper early-warning services in place … and that is what we are very much promoting.”

World Leaders Agree on Artificial Intelligence Risks

World leaders at a safety summit have agreed on the importance of mitigating risks posed by rapid advancements in the emerging technology of artificial intelligence.

The inaugural two-day AI Safety Summit, hosted by British Prime Minister Rishi Sunak at Bletchley Park, England, started Wednesday with leaders from 28 nations, including the United States and China. The leaders agreed to work toward a “shared agreement and responsibility” about AI risks, with further meetings planned in South Korea and France.

Leaders including European Commission President Ursula von der Leyen, U.S. Vice President Kamala Harris, U.N. Secretary-General António Guterres and others discussed their individual approaches to testing AI models to ensure the technology develops safely.

On Thursday, the summit continued, with focused conversations among what the U.K. called a small group of countries “with shared values.” The leaders in the group came from the EU, the U.N., Italy, Germany, France and Australia.

Some leaders, including Sunak, said immediate, sweeping regulation is not the way forward, and some AI companies have feared that regulation could thwart the technology before it can reach its full potential.

At a Thursday news conference, Sunak announced another landmark agreement by countries pledging to “work together on testing the safety of new AI models before they are released.”

The countries involved in the talks included the U.S., EU, France, Germany, Italy, Japan, South Korea, Singapore, Canada and Australia. China did not participate in the second day of talks.

The summit concluded with a discussion between Sunak and billionaire Elon Musk in front of a group of invited business leaders and journalists.

Musk praised the inclusion of China in the AI safety agreement, a decision that some condemned after many Western governments reduced their tech cooperation with China. Musk went on to stress the importance of the U.S., the U.K. and China working together to promote AI safety.

The discussion between Sunak and Musk was scheduled to air online later on Thursday.

Some information in this report came from The Associated Press and Reuters.

Beatles Release New Song With John, Paul, George, Ringo and AI Tech

The final Beatles recording is here.

Titled “Now and Then,” the almost impossible-to-believe track is four minutes and eight seconds of the first and only original Beatles recording of the 21st century. There’s a countdown, then acoustic guitar strumming and piano bleed into the unmistakable vocal tone of John Lennon in the song’s introduction: “I know it’s true / It’s all because of you / And if I make it through / It’s all because of you.”

More than four decades since Lennon’s murder and two since George Harrison’s death, the very last Beatles song has been released as a double A-side single with “Love Me Do,” the band’s 1962 debut single.

“Now and Then” comes from a batch of unreleased demos written by Lennon in the 1970s, which were given to his former bandmates by Yoko Ono. They used the tape to construct the songs “Free As a Bird” and “Real Love,” released in the mid-1990s. But there were technical limitations to finishing “Now and Then.”

On Wednesday, a short film titled “The Beatles — Now And Then — The Last Beatles Song” was released, detailing the creation of the track. On the original tape, Lennon’s voice was hidden and the piano was “hard to hear,” as Paul McCartney describes it. “And in those days, of course, we didn’t have the technology to do the separation.”

That changed in 2022, when the band — now a duo — was able to utilize the same technical restoration methods that separated the Beatles’ voices from background sounds during the making of director Peter Jackson’s 2021 documentary series, “The Beatles: Get Back.” And so, they were able to isolate Lennon’s voice from the original cassette and complete “Now and Then” using machine learning.
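
The film does not spell out the software behind that restoration, but the general technique it describes, machine-learning source separation, also exists in open-source form. As a minimal illustrative sketch only, assuming Deezer’s open-source Spleeter library and a hypothetical input file (this is not the system Jackson’s team built):

```python
# Illustrative sketch of ML source separation, not the Beatles' actual tooling.
# Assumes Deezer's open-source Spleeter library; file names are placeholders.
from spleeter.separator import Separator

# Load a pretrained two-stem model that splits audio into "vocals"
# and "accompaniment" (everything else).
separator = Separator("spleeter:2stems")

# Separate the mixed recording; this writes vocals.wav and accompaniment.wav
# into a subdirectory of "separated/" named after the input file.
separator.separate_to_file("old_demo_tape.wav", "separated/")
```

The production restoration involved far more than a single library call, but an isolated vocal stem of this kind is the type of output the paragraph above describes.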

When the song was first announced in June, McCartney described artificial intelligence technology as “kind of scary but exciting,” adding: “We will just have to see where that leads.”

“To still be working on Beatles’ music in 2023 — wow,” he said in “The Beatles — Now And Then — The Last Beatles Song.” “We’re actually messing around with state-of-the-art technology, which is something the Beatles would’ve been very interested in.”

“The rumors were that we just made it up,” Ringo Starr told The Associated Press of Lennon’s contributions to the forthcoming track in September. “Like we would do that anyway.

“This is the last track, ever, that you’ll get the four Beatles on the track. John, Paul, George and Ringo,” he said.

McCartney and Starr built the track from Lennon’s demo, adding guitar parts George Harrison wrote in the 1995 sessions and a slide guitar solo in his signature style. McCartney and Starr tracked their bass and drum contributions. A string arrangement was written with the help of Giles Martin, son of the late Beatles producer George Martin — a clever callback to the classic ambitiousness of “Strawberry Fields,” or “Yesterday,” or “I Am the Walrus.” The string players couldn’t be told they were contributing to the last-ever Beatles track, so McCartney played it off as a solo endeavor.

On Friday, an official music video for “Now and Then,” directed by Jackson, will premiere on the Beatles’ YouTube channel. It was created using footage McCartney and Starr took of themselves performing, as well as 14 hours of “long forgotten film shot during the 1995 recording sessions, including several hours of Paul, George and Ringo working on ‘Now and Then,’ ” Jackson said in a statement.

It also uses previously unseen home movie footage provided by Lennon’s son Sean and Olivia Harrison, George’s wife, and “a few precious seconds of the Beatles performing in their leather suits, the earliest known film of the Beatles and never seen before,” provided by Pete Best, the band’s original drummer.

“The result is pretty nutty and provided the video with much needed balance between the sad and the funny,” said Jackson.

Destruction of Dam Leads to Archaeological Discoveries in Dnipro River

After an explosion destroyed the Kakhovka Dam in June 2023, the water level dropped in the reservoir above the dam in Ukraine’s Zaporizhzhia region. Since then, archaeologists have found hundreds of valuable artifacts in the newly exposed areas of the site in the Khortytsia National Reserve. Eva Myronova has the story, narrated by Anna Rice. VOA footage by Oleksandr Oliynyk.

India Probing Phone Hacking Complaints by Opposition Politicians, Minister Says

India’s cybersecurity agency is investigating complaints of mobile phone hacking by senior opposition politicians who reported receiving warning messages from Apple, Information Technology Minister Ashwini Vaishnaw said.

Vaishnaw was quoted in the Indian Express newspaper as saying Thursday that CERT-In, the computer emergency response team based in New Delhi, had started the probe, adding that “Apple confirmed it has received the notice for investigation.”

A political aide to Vaishnaw and two officials in the federal home ministry told Reuters that all the cybersecurity concerns raised by the politicians were being scrutinized.

There was no immediate comment from Apple about the investigation.

This week, Indian opposition leader Rahul Gandhi accused Prime Minister Narendra Modi’s government of trying to hack into opposition politicians’ mobile phones after some lawmakers shared screenshots on social media of a notification quoting the iPhone manufacturer as saying: “Apple believes you are being targeted by state-sponsored attackers who are trying to remotely compromise the iPhone associated with your Apple ID.”

A senior minister from Modi’s government also said he had received the same notification on his phone.

Apple said it did not attribute the threat notifications to “any specific state-sponsored attacker,” adding that “it’s possible that some Apple threat notifications may be false alarms, or that some attacks are not detected.”

In 2021, India was rocked by reports that the government had used Israeli-made Pegasus spyware to snoop on scores of journalists, activists and politicians, including Gandhi.

The government has declined to answer questions about whether India or any of its state agencies had purchased Pegasus spyware for surveillance.

US Pushes for Global Protections Against Threats Posed by AI

U.S. Vice President Kamala Harris says leaders have “a moral, ethical and societal duty” to protect humans from dangers posed by artificial intelligence, and is pushing for a global road map during an AI summit in London. Analysts agree and say one element needs to be constant: human oversight. VOA’s Anita Powell reports from Washington.

US Pushes for Global Protections Against Threats Posed by AI

U.S. Vice President Kamala Harris said Wednesday that leaders have “a moral, ethical and societal duty” to protect people from the dangers posed by artificial intelligence, as she leads the Biden administration’s push for a global AI roadmap.

Analysts, in commending the effort, say human oversight is crucial to preventing the weaponization or misuse of this technology, which has applications in everything from military intelligence to medical diagnosis to making art.

“To provide order and stability in the midst of global technological change, I firmly believe that we must be guided by a common set of understandings among nations,” Harris said. “And that is why the United States will continue to work with our allies and partners to apply existing international rules and norms to AI, and work to create new rules and norms.”

Harris also announced the founding of the government’s AI Safety Institute and released draft policy guidance on the government’s use of AI, as well as a declaration on the responsible military use of AI.

Just days earlier, President Joe Biden – who described AI as “the most consequential technology of our time” – signed an executive order establishing new standards, including requiring that major AI developers report their safety test results and other critical information to the U.S. government.

AI is increasingly used for a wide range of applications. For example, on Wednesday the Defense Intelligence Agency announced that its AI-enabled military intelligence database will soon achieve “initial operational capability.”

And perhaps on the opposite end of the spectrum, some programmer decided to “train an AI model on over 1,000 human farts so it would learn to create realistic fart sounds.”

Like any other tool, AI is subject to its users’ intentions and can be used to deceive, misinform or hurt people – something that billionaire tech entrepreneur Elon Musk stressed on the sidelines of the London summit, where he said he sees AI as “one of the biggest threats” to society. He called for a “third-party referee.”

Earlier this year, Musk was among the more than 33,000 people to sign an open letter calling on AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

“Here we are, for the first time, really in human history, with something that’s going to be far more intelligent than us,” said Musk, who is looking at creating his own generative AI program. “So it’s not clear to me we can actually control such a thing. But I think we can aspire to guide it in a direction that’s beneficial to humanity. But I do think it’s one of the existential risks that we face and it’s potentially the most pressing one.”

This is also something industry leaders like OpenAI CEO Sam Altman told U.S. lawmakers in testimony before congressional committees earlier this year.

“My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world. I think that could happen in a lot of different ways,” he told lawmakers at a Senate Judiciary Committee hearing on May 16.

That’s because, said Jessica Brandt, policy director for the AI and Emerging Technology Initiative at the Brookings Institution, while “AI has been used to do pretty remarkable things” – especially in the field of scientific research – it is limited by its creators.

“It’s not necessarily doing something that humans don’t know how to do, but it’s making discoveries that humans would be unlikely to be able to make in any meaningful timeframe, because they can just perform so many calculations so quickly,” she told VOA on Zoom.

And, she said, “AI is not objective, or all-knowing. There’s been plenty of studies showing that AI is really only as good as the data that the model is trained on and that the data can have or reflect human bias. This is one of the major concerns.”

Or, as AI Now Executive Director Amba Kak said earlier this year in a magazine interview about AI systems: “The issue is not that they’re omnipotent. It is that they’re janky now. They’re being gamed. They’re being misused. They’re inaccurate. They’re spreading disinformation.”

Analysts say these government and tech officials don’t need a one-size-fits-all solution, but rather an alignment of values – and critically, human oversight and moral use.

“It’s OK to have multiple different approaches, and then also, where possible, coordinate to ensure that democratic values take root in the systems that govern technology globally,” Brandt said.

Industry leaders tend to agree, with Mira Murati, OpenAI’s chief technology officer, saying: “AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

Analysts watching regulation say the U.S. is unlikely to come up with one, coherent solution for the problems posed by AI.

“The most likely outcome for the United States is a bottom-up patchwork quilt of executive branch actions,” said Bill Whyman, a senior adviser in the Strategic Technologies Program at the Center for Strategic and International Studies. “Unlike Europe, the United States is not likely to pass a broad national AI law over the next few years. Successful legislation is likely focused on less controversial and targeted measures like funding AI research and AI child safety.”
