Day: August 24, 2023

Death Toll Rises to Five in Poland Legionnaires’ Disease Outbreak

The death toll from an outbreak of Legionnaires’ disease in Rzeszow, southeast Poland, has risen to five, local authorities said Thursday as they worked to identify the source of the contamination.

The fifth victim was a 79-year-old woman who had been admitted to the hospital a few days earlier.

“She was a patient with multiple long-term conditions, including cancer, and had been in the anesthesiology and intensive care ward,” the director of the Rzeszow municipal hospital, Grzegorz Materna, told state news agency PAP. 

In all, at least 71 people have been hospitalized in the outbreak.

Legionnaires’ disease, caused by Legionella bacteria, is not contagious and cannot be spread directly from person to person, but the bacteria can multiply in water and air-conditioning systems. It causes a severe form of pneumonia that is especially dangerous for people with weakened immune systems.

“The hypothesis of the municipal water supply network as the source of infection is being verified,” the Polish health ministry said Thursday on X, the app formerly known as Twitter, after an overnight emergency meeting in Rzeszow. 

But the test results of samples taken from the water system are not expected until Monday. 

In the meantime, the authorities in Rzeszow, a city of nearly 200,000 residents, vowed to carry out additional disinfection work. 

According to the local authorities, all five victims in the Rzeszow outbreak were elderly people.


US Sues SpaceX for Discriminating Against Refugees, Asylum-Seekers

The U.S. Justice Department is suing Elon Musk’s SpaceX for refusing to hire refugees and asylum-seekers at the rocket company.

In a lawsuit filed on Thursday, the Justice Department said SpaceX routinely discriminated against these job applicants between 2018 and 2022, in violation of U.S. immigration laws.

The lawsuit says that Musk and other SpaceX officials falsely claimed the company was allowed to hire only U.S. citizens and permanent residents due to export control laws that regulate the transfer of sensitive technology.

“U.S. law requires at least a green card to be hired at SpaceX, as rockets are advanced weapons technology,” Musk wrote in a June 16, 2020, tweet cited in the lawsuit.

In fact, U.S. export control laws impose no such restrictions, according to the Justice Department.

Those laws limit the transfer of sensitive technology to foreign entities, but they do not prevent high-tech companies such as SpaceX from hiring job applicants who have been granted refugee or asylum status in the U.S. (Foreign nationals, however, need a special permit.)

“Under these laws, companies like SpaceX can hire asylees and refugees for the same positions they would hire U.S. citizens and lawful permanent residents,” the Department said in a statement. “And once hired, asylees and refugees can access export-controlled information and materials without additional government approval, just like U.S. citizens and lawful permanent residents.”

The company did not respond to a VOA request for comment on the lawsuit and whether it had changed its hiring policy.

Recruiters discouraged refugees, say investigators

The Justice Department’s civil rights division launched an investigation into SpaceX in 2020 after learning about the company’s alleged discriminatory hiring practices.

The inquiry discovered that SpaceX “failed to fairly consider or hire asylees and refugees because of their citizenship status and imposed what amounted to a ban on their hire regardless of their qualification, in violation of federal law,” Assistant Attorney General Kristen Clarke said in a statement.

“Our investigation also found that SpaceX recruiters and high-level officials took actions that actively discouraged asylees and refugees from seeking work opportunities at the company,” Clarke said.

According to data SpaceX provided to the Justice Department, out of more than 10,000 hires between September 2018 and May 2022, SpaceX hired only one person described as an asylee on his application.

The company hired the applicant about four months after the Justice Department notified it about its investigation, according to the lawsuit.

No refugees were hired during this period.

“Put differently, SpaceX’s own hiring records show that SpaceX repeatedly rejected applicants who identified as asylees or refugees because it believed that they were ineligible to be hired due to” export regulations, the lawsuit says.

On one occasion, a recruiter turned down an asylee “who had more than nine years of relevant engineering experience and had graduated from Georgia Tech University,” the lawsuit says.

Suit seeks penalties, change

SpaceX, based in Hawthorne, California, designs, manufactures and launches advanced rockets and spacecraft.

The Justice Department’s lawsuit asks an administrative judge to order SpaceX to “cease and desist” its alleged hiring practices and seeks civil penalties and policy changes.


India Becomes Fourth Country to Land Spacecraft on Moon

On Wednesday, India became the first country to land an unmanned lander and robotic probe near the moon’s south pole. Experts say the mission marks a major milestone in the country’s efforts to move to the front lines of space exploration. Anjana Pasricha reports from New Delhi.


Fukushima Nuclear Plant Begins Releasing Radioactive Water Into Sea

The operator of the tsunami-wrecked Fukushima Daiichi nuclear power plant says it has begun releasing its first batch of treated radioactive water into the Pacific Ocean.

In a live video from a control room at the plant Thursday, Tokyo Electric Power Company Holdings showed a staff member turning on a seawater pump, marking the beginning of the controversial project that is expected to last for decades.

“Seawater pump A activated,” the main operator said, confirming the release was under way.

Japanese fishing groups have opposed the plan, fearing further damage to the reputation of their seafood. Groups in China and South Korea have also raised concerns, making the release a political and diplomatic issue.

But the Japanese government and TEPCO say the water must be released to make room for the plant’s decommissioning and to prevent accidental leaks. They say the treatment and dilution will bring the wastewater well within international safety standards and that its environmental impact will be negligible. Some scientists counter that the long-term impact of the low-dose radioactivity remaining in the water needs attention.

The water release begins more than 12 years after the March 2011 nuclear meltdowns, caused by a massive earthquake and tsunami. It marks a milestone in the plant’s battle with an ever-growing radioactive water stockpile that TEPCO and the government say has hampered the daunting task of removing the fatally toxic melted debris from the reactors.

The pump activated Thursday afternoon sent the first batch of diluted, treated water from a mixing pool to a secondary pool, from which it is discharged into the ocean through an undersea tunnel. Contaminated water at the plant is collected and treated, with part of it recycled as cooling water and the rest stored in around 1,000 tanks, which are already filled to 98% of their 1.37-million-ton capacity.

Those tanks, which cover much of the plant complex, must be freed up to build the new facilities needed for the decommissioning process, officials said.

Prime Minister Fumio Kishida said the release is indispensable and cannot be postponed. He noted that an experimental removal of a small amount of melted debris from the No. 2 reactor is set for later this year using a remote-controlled giant robotic arm.

TEPCO executive Junichi Matsumoto said Thursday’s release was to begin with the least radioactive water to ensure safety.

Final preparation for the release began Tuesday, when just one ton of treated water was sent from a tank for dilution with 1,200 tons of seawater, and the mixture was kept in the primary pool for two days for final sampling to ensure safety, Matsumoto said. A batch of 460 tons was to be sent to the mixing pool Thursday for the actual discharge.

But Fukushima’s fisheries, tourism and economy — which are still recovering from the disaster — worry the release could be the beginning of a new hardship.

Fukushima’s current fish catch is only about one-fifth its pre-disaster level, in part due to a decline in the fishing population. China has tightened radiation testing on Japanese products from Fukushima and nine other prefectures, halting exports at customs for weeks, Fisheries Agency officials said.


AI Firms Under Fire for Allegedly Infringing on Copyrights

New artificial intelligence tools that write human-like prose and create stunning images have taken the world by storm. But these awe-inspiring technologies are not creating something out of nothing; they’re trained on lots and lots of data, some of which come from works under copyright protection.

Now, the writers, artists and others who own the rights to the material used to teach ChatGPT and other generative AI tools want to stop what they see as blatant copyright infringement of mass proportions.

With billions of dollars at stake, U.S. courts will most likely have to sort out who owns what, using the 1976 Copyright Act, the same law that has determined who owns much of the content published on the internet.

U.S. copyright law seeks to strike a balance between protecting the rights of content creators and fostering creativity and innovation. Among other things, the law gives content creators the exclusive right to reproduce their original work and to prepare derivative works.

But it also provides for an exception. Known as “fair use,” it permits the use of copyrighted material without the copyright holder’s permission for content such as criticism, comment, news reporting, teaching and research.

On the one hand, “we want to allow people who have currently invested time, money, creativity to reap the rewards of what they have done,” said Sean O’Connor, a professor of law at George Mason University. “On the other hand, we don’t want to give them such strong rights that we inhibit the next generation of innovation.”

Is AI ‘scraping’ fair use?

The development of generative AI tools is testing the limits of “fair use,” pitting content creators against technology companies, with the outcome of the dispute promising wide-ranging implications for innovation and society at large.

In the 10 months since ChatGPT’s groundbreaking launch, AI companies have faced a rapidly increasing number of lawsuits over content used to train generative AI tools. The plaintiffs are seeking damages and want the courts to end the alleged infringement.

In January, three visual artists filed a proposed class-action lawsuit against Stability AI Ltd. and two others in San Francisco, alleging that Stability “scraped” more than 5 billion images from the internet to train its popular image generator Stable Diffusion, without the consent of copyright holders.

Stable Diffusion is a “21st-century collage tool” that “remixes the copyrighted works of millions of artists whose work was used as training data,” according to the lawsuit.

In February, stock photo company Getty Images filed its own lawsuit against Stability AI in both the United States and Britain, saying the company copied more than 12 million photos from Getty’s collection without permission or compensation.

In June, two U.S.-based authors sued OpenAI, the creator of ChatGPT, claiming the company’s training data included nearly 300,000 books pulled from illegal “shadow library” websites that offer copyrighted books.

“A large language model’s output is entirely and uniquely reliant on the material in its training dataset,” the lawsuit says.

Last month, American comedian and author Sarah Silverman and two other writers sued OpenAI and Meta, the parent company of Facebook, over the same claims, saying their chatbots were trained on books that had been illegally acquired.

The lawsuit against OpenAI includes what it describes as “very accurate summaries” of the authors’ books generated by ChatGPT, suggesting the company illegally “copied” and then used them to train the chatbot.

The artificial intelligence companies have rejected the allegations and asked the courts to dismiss the lawsuits.

In a court filing in April, Stability AI, research lab Midjourney and online art gallery DeviantArt wrote that visual artists who sue “fail to identify a single allegedly infringing output image, let alone one that is substantially similar to any of their copyrighted works.”

For its part, OpenAI has defended its use of copyrighted material as “fair use,” saying it pulled the works from publicly available datasets on the internet.

The cases are slowly making their way through the courts. It is too early to say how judges will decide.

Last month, a federal judge in San Francisco said he was inclined to toss out most of a lawsuit brought by the three artists against Stability AI but indicated that the claim of direct infringement may continue.

“The big question is fair use,” said Robert Brauneis, a law professor and co-director of the Intellectual Property Program at George Washington University. “I would not be surprised if some of the courts came out in different ways, that some of the cases said, ‘Yes, fair use.’ And others said, ‘No.’”

If the courts are split, the question could eventually go to the Supreme Court, Brauneis said.

Assessing copyright claims

Training generative AI tools to create new works raises two legal questions: Is the data use authorized? And is the new work it creates “derivative” or “transformative”?

The answer is not clear-cut, O’Connor said.

“On the one hand, what the supporters of the generative AI models are saying is that they are acting not much differently than we as humans would do,” he said. “When we read books, watch movies, listen to music, and if we are talented, then we use those to train ourselves as models.

“The counterargument is that … it is categorically different from what humans do when they learn how to become creative themselves.”

While artificial intelligence companies claim their use of the data is fair, O’Connor said they still have to prove that the use was authorized.

“I think that’s a very close call, and I think they may lose on that,” he said.

On the other hand, he said, the AI models can probably avoid liability for generating content that seems to be in the style of a current author but is not the same work.

“That claim is probably not going to succeed,” O’Connor said. “It will be seen as just a different work.”

But Brauneis said content creators have a strong claim: The AI-generated output will likely compete with the original work.

Imagine you’re a magazine editor who wants an illustration to accompany an article about a particular bird, Brauneis suggested. You could do one of two things: Commission an artist or ask a generative AI tool like Stable Diffusion to create it for you. After a few attempts with the latter, you’ll probably get an image that you can use.

“One of the most important questions to ask about in fair use is, ‘Is this use a substitute, or is it competing with the work of art that is being copied?’” Brauneis said. “And the answer here may be yes. And if it is [competing], that really weighs strongly against fair use.”

This is not the first time that technology companies have been sued over their use of copyrighted material.

In 2005, the Authors Guild filed a class-action lawsuit against Google and three university libraries over Google’s digital books project, alleging “massive copyright infringement.”

In 2015, an appeals court ruled that the project, by then renamed Google Books, was protected under the fair use doctrine.

In 2007, Viacom sued both Google and YouTube for allowing users to upload and view copyrighted material owned by Viacom, including complete episodes of TV shows. The case was later settled out of court.

For Brauneis, the current “Wild West era of creating AI models” recalls YouTube’s freewheeling early days.

“They just wanted to get viewers, and they were willing to take a legal risk to do that,” Brauneis said. “That’s not the way YouTube operates now. YouTube has all sorts of precautions to identify copyrighted content that has not been permitted to be placed on YouTube and then to take it down.”

Artificial intelligence companies may make a similar pivot.

They may have justified using copyrighted material to test out their technology. But now that their models are working, they “may be willing to sit down and think about how to license content,” Brauneis said.
