A new study highlights how the deluge of online images may be subtly shaping our perceptions, revealing that search engine imagery reinforces gender stereotypes in a way that text-based searches do not. With people spending more than six hours a day online on average, visual input from social media feeds, websites, and digital ads contributes to a cycle that may be amplifying implicit biases in society.

The recent research analyzed image search results for various occupations on platforms like Google, Wikipedia, and IMDb. It found that images were overwhelmingly gendered, particularly in fields historically viewed as male- or female-dominated. For example, searches for “heart surgeon,” “investment banker,” or “developer” predominantly returned images of men, while terms like “housekeeper” and “nurse practitioner” were associated almost exclusively with women.

The study went beyond just measuring bias in search results. Researchers conducted an experiment where 423 U.S. participants used Google to search for occupations, with some participants receiving visual representations through Google Images while others used text-based Google News searches. Those exposed to image results displayed a marked increase in implicit gender biases, as measured by a standard association test, even days after the experiment. The findings highlight the impact of image-heavy platforms like Instagram and TikTok in normalizing biased visuals, raising concerns that the sheer volume of gender-stereotyped images might be entrenching outdated perceptions.

Vicious Cycle of AI and Bias

The problem extends to AI models, which are trained on vast repositories of online content, including stereotyped images. When users ask AI platforms like ChatGPT to visualize various professions, they often receive outputs that reflect existing biases. A request for images of “doctor” or “scientist,” for example, may yield predominantly white male figures, reinforcing societal stereotypes. Similarly, descriptors such as “successful” or “smart” also skew heavily towards images of white men, reflecting the biases embedded in the data used to train these systems.

The study’s authors warn that this cycle could worsen as AI tools continue to rely on biased online content. “The rise of images in popular internet culture may come at a critical social cost,” they write, noting that these biases not only influence AI outputs but also shape user perceptions. The more biased imagery we encounter, the more normalized these stereotypes become, perpetuating a feedback loop of implicit bias.

Seeking Solutions and Reclaiming Visual Space

Amid growing concerns, experts point to several solutions to mitigate the spread of biased visual content. Technology companies bear much responsibility, though attempts to address the issue have sometimes led to overcorrections. Google’s AI tool Gemini, for instance, has been criticized for inserting diversity where it historically wouldn’t exist, sometimes producing historically inaccurate imagery. Yet, even with the best intentions, fixing ingrained biases remains a challenge for tech firms.

One approach recommended for individuals is to curate their social media feeds to follow diverse creators and photographers from around the world. Another option is the “digital detox,” as outlined in art entrepreneur Marine Tanguy’s The Visual Detox: How to Consume Media Without Letting It Consume You, which suggests limiting screen time and reclaiming time away from devices. Tanguy advocates setting daily app timers, deleting unused apps, and spending time outdoors to reduce reliance on screens.

Perhaps most importantly, experts stress the value of self-awareness in understanding how digital imagery influences our beliefs and perceptions. Unlike previous generations, modern users encounter a constant stream of images that subtly shape their worldviews. For much of human history, art and visual media were limited, yet today’s image-saturated environment is altering how we see others and ourselves, often without conscious realization.

As visual culture continues to expand online, recognizing these subtle influences may be crucial in building a more balanced and less biased digital world.

Technology

Two-Year-Old Boy Cancer-Free After Groundbreaking Treatment in North London

A two-year-old boy from North London has become the youngest person to be treated for cancer using Nanoknife technology, and he is now cancer-free.

George, from Camden, was diagnosed with rhabdomyosarcoma (RMS), a rare and aggressive form of soft tissue cancer, in his liver and bile duct during the summer of 2023. His father, Jonathan, recalled the moment of the diagnosis as devastating. “I will never forget that moment,” Jonathan said. “It felt like my entire world had collapsed.”

After undergoing three rounds of chemotherapy, George was treated with Nanoknife technology at King’s College Hospital. This pioneering treatment uses electrical currents to target and destroy cancerous tissue. Surgeons successfully removed the tumor from George’s liver, ensuring clear margins around the affected area.

Jonathan expressed his relief and joy, saying, “The surgeons managed to remove all the tumor, and had clear margins all the way around the removed section of his liver. This was the news we’d been hoping and praying for.” He added that from the moment George was diagnosed, he and his family worked tirelessly to ensure their son received the best possible treatment. “We loved that the Nanoknife was something new and groundbreaking, and we felt we had some input into making it happen.”

Eighteen months after his initial diagnosis, George is now cancer-free and began attending nursery school in September. His recovery has been celebrated as a remarkable success, and he was recently awarded the Cancer Research UK for Children & Young People Star Award for his bravery throughout the treatment process.

George’s story highlights the potential of innovative medical technologies in the fight against cancer, offering hope to families facing similar challenges. The use of Nanoknife technology marks a significant step forward in the treatment of pediatric cancers, with George serving as a testament to the possibilities of new, life-saving treatments.

The family’s journey, while marked by fear and uncertainty, has ended on a hopeful note, with George’s future brighter than ever.

Scientists Explore the Mystery of the Sun’s Lost Companion Star

Our Sun, the central star of our Solar System, is somewhat of an anomaly in the Milky Way galaxy, where binary star systems—pairs of stars that orbit each other—are quite common. However, recent research suggests that the Sun may have once had a companion, a partner it has since lost to time. The big question now is: where did it go?

The Sun, orbiting in one of the Milky Way’s spiral arms, takes about 230 million years to make a full orbit around the galaxy. While it currently drifts alone, the nearest star to the Sun, Proxima Centauri, is located 4.2 light-years away—a distance so vast it would take thousands of years for even the fastest spacecraft to reach.

However, scientists are increasingly recognizing that most stars, unlike the Sun, form in pairs. In fact, binary star systems are so prevalent that some astrophysicists suggest that all stars may have originally formed as binary pairs. This leads to an intriguing question: could our Sun have once been part of such a system, only to lose its companion long ago?

Gongjie Li, an astronomer at the Georgia Institute of Technology, says it is certainly a possibility. “It’s very interesting,” she noted, pointing out that the absence of a companion star likely spared Earth from gravitational disruptions that might have made life on our planet impossible.

The idea that stars form in pairs is supported by recent findings. Sarah Sadavoy, an astrophysicist at Queen’s University in Canada, has shown that the process of star formation often leads to the creation of multiple stars. Her 2017 research indicated that star-forming regions, like the Perseus molecular cloud, preferentially create pairs of stars. However, not all stars in these systems remain together; some break apart within a million years.

If our Sun had a companion star, it likely would have had significant effects on our Solar System’s formation. For instance, Amir Siraj, an astrophysicist at Harvard University, suggests that the presence of such a companion could explain some of the features of the Oort Cloud—a vast, icy region far beyond Pluto. This distant shell of icy objects could have been influenced by the gravitational pull of the Sun’s missing twin, possibly even contributing to the hypothesized existence of Planet Nine, a yet-undiscovered planet in the outer reaches of our Solar System.

While finding our Sun’s companion star may be a difficult task, Konstantin Batygin, a planetary scientist at the California Institute of Technology, believes there may be clues yet to be uncovered. Recent simulations suggest that a binary companion could explain some of the structure of the Oort Cloud and the slight tilt of the Sun’s axis.

Despite the challenges, the idea that our Sun had a companion star raises intriguing questions about the formation of exoplanetary systems. As astronomers continue to explore distant regions of space, they may eventually uncover more evidence of our Sun’s lost twin—offering insights not only into the history of our own Solar System but also into the diverse ways stars and planets come into being across the universe.

Journalism Body Urges Apple to Remove AI Feature After Misleading Headline

A leading journalism group has called for Apple to remove its new generative AI feature following an incident in which the technology created a misleading headline about a high-profile murder case in the United States.

The BBC lodged a complaint with Apple after its Apple Intelligence tool, which uses artificial intelligence to summarise and group together notifications, generated a false headline suggesting that Luigi Mangione, accused of the murder of healthcare insurance CEO Brian Thompson, had shot himself. The claim was inaccurate; Mangione had done no such thing.

Following the error, Reporters Without Borders (RSF) voiced concerns about the risks posed by generative AI tools to media outlets. The group stressed that the incident demonstrated the AI’s unreliability and immaturity in providing trustworthy information to the public.

Vincent Berthier, head of RSF’s technology and journalism desk, stated, “AIs are probability machines, and facts can’t be decided by a roll of the dice.” He added that the misattribution of false information to a respected media outlet like the BBC undermines the credibility of both the news outlet and the public’s trust in the information they receive.

Apple Intelligence, which was launched in the UK last week, allows users to group notifications, including news summaries, to reduce interruptions from constant alerts. The feature is available on devices running iOS 18.1 or later, including the iPhone 16, iPhone 15 Pro, and iPhone 15 Pro Max, as well as some iPads and Macs.

A BBC spokesperson confirmed the corporation had contacted Apple regarding the issue, urging it to address the problem, though it is not yet known whether the company has responded. Apart from the misleading headline regarding Mangione, the same notification summary provided accurate details on unrelated topics, including the political situation in Syria and updates on South Korean President Yoon Suk Yeol.

This is not the first instance of Apple Intelligence misrepresenting news. In November, three articles from the New York Times were grouped together in one notification, which included the false claim that Israeli Prime Minister Benjamin Netanyahu had been arrested. The notification misrepresented an arrest warrant issued by the International Criminal Court, leading to confusion about the actual content of the articles. The New York Times has not commented on the incident.

Apple has yet to respond to the complaints, but the incident has raised broader concerns about the reliability of AI-generated news summaries. While users can report issues with notifications, Apple has not disclosed how many reports it has received. As the debate continues, the accuracy of generative AI in journalism remains under close scrutiny.
