The recent revelation that AI-powered search engines from Google, Microsoft, and Perplexity are promoting scientific racism underscores a deep problem within Big Tech: the lack of diversity in its workforce. That homogeneity creates a blind spot when it comes to recognizing and mitigating algorithmic biases that perpetuate harmful stereotypes and misinformation.

How could such flawed and harmful information be amplified by these powerful search engines? Because these companies are not adequately equipped to identify and address biases in their own products. Without a critical mass of diverse voices, harmful stereotypes and misinformation can easily slip through the cracks.

This is not simply about "representation," but about building products in an environment where diverse perspectives are valued and actively sought out. Only then can we truly build AI systems that are fair, equitable, and resistant to the amplification of harmful ideologies like scientific racism. #AI #EthicalAI #SocialJustice #DEI #TechRacism #Google #BetterAI
-
Sergey Brin, co-founder of Google, has recently re-engaged with the tech giant, focusing on advancing artificial intelligence. After stepping back from daily operations in 2019, Brin has returned to work on Google's AI model, Gemini, attending meetings and collaborating with researchers. This renewed involvement comes amid increased competition in the AI space, with companies like OpenAI and Microsoft making significant strides. Brin's hands-on approach underscores the importance he places on AI's potential to revolutionize technology and society.

However, Brin's return is not without challenges. Google's Gemini recently faced criticism for generating racially insensitive images, leading to public acknowledgment of the issue by Brin himself. He admitted that the company "definitely messed up" and emphasized the need for more thorough testing.

Beyond his professional endeavors, Brin's personal life has also been in the spotlight. His ex-wife, Nicole Shanahan, was recently reported to have offered $500,000 to a Washington Post reporter to reveal sources leaking information about her. This incident adds a layer of complexity to Brin's public image, intertwining his personal relationships with his professional legacy.

As Brin navigates these multifaceted challenges, one might ponder: In the quest to push the boundaries of AI, are we adequately considering the ethical implications and potential societal impacts?

This post was generated by my custom-built personal agent, powered by LLMs and designed to operate my computer. If you're curious about how it works, feel free to ask!
-
The recent case of major AI-powered search engines surfacing scientific racism points to the critical need for strict ethical oversight in AI-cloud integrations. As the world gears up for AI supremacy, aligning technology with core human values is essential. Key ways to steer clear of such pitfalls include:

1. Rigorously testing and validating AI models against a wide array of data scenarios before deployment.
2. Regularly auditing AI outcomes and algorithms to spot and rectify any bias, discrimination, or inappropriate trends (a minimal sketch of what such an audit can look like follows below).
3. Continuously updating AI models based on real-world feedback, ensuring ethics and fairness stay at the forefront.

It is of utmost importance to bridge the gap between highly advanced cloud technology and the ethical frameworks governing it. #EthicalAI #CloudSecurity #AIbias #MachineLearning #EthicsInTech
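To make point 2 concrete, here is a minimal sketch in Python of one slice of such an audit: comparing favorable-outcome rates across demographic groups. The data, the group labels, and the 0.8 threshold are hypothetical illustrations for this post, not a prescription for any particular system.

```python
# A minimal sketch of point 2 above: auditing model outcomes for
# group-level disparities. All data, group labels, and thresholds
# here are hypothetical illustrations.
from collections import defaultdict

def selection_rates(records):
    """Rate of favorable model decisions per demographic group.

    records: iterable of (group, decision) pairs, where decision is
    1 (favorable) or 0 (unfavorable).
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        favorable[group] += decision
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest.

    The 'four-fifths' rule of thumb flags ratios below 0.8 as worth
    investigating; it is a screening heuristic, not a legal test.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log: (group, model decision).
audit_log = [("A", 1), ("A", 1), ("A", 0),
             ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(audit_log)
print(rates)                          # group A: 2/3, group B: 1/3
print(disparate_impact_ratio(rates))  # 0.5 -> flag for review
```

The same pattern extends to other fairness metrics (equalized odds, per-group calibration); the point is that auditing becomes a routine, automatable check rather than an aspiration.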
-
🤔 Philosophical Friday: The Risk of Misinformation and Bias in AI

For this week's Philosophical Friday, let's talk about something critical: bias, misinformation, and control in the age of AI. Tech companies once promised to "do no evil," but as profits took priority, that ideal faded. Google, Facebook (now Meta), and others have largely controlled what information we see, leading to a more divided and confused society.

Now, AI companies are vying to unseat the behemoths. For example, OpenAI just launched a web search feature and is rumored to be working on a browser to compete with Chrome, right as the Justice Department moves to break up Google's control of Chrome. This fight for dominance will decide what's true and how we understand the world for years to come.

Information has always been a tool of power. Good and bad leaders throughout history have known this, which is why the very first of our constitutional amendments protects freedom of speech and the press. As AI increasingly shapes what we know and consume, the battle for control over reality is accelerating, and I'm afraid we may lose.

To prevent a bad situation from getting worse, we need shared agreements on how to manage AI, but our government has shown signs of reducing regulations across the board, even in this area. With prominent figures like Elon Musk, who owns an AI company and a social media platform, playing a huge role in the administration, we have to ask: Is there a conflict of interest? Will policies to ensure fairness and accountability be implemented, or will the future of AI be dictated by the ruling class and their private interests again?

So, I'll leave you with this question: Are we building AI to enlighten humanity, or will it be used as yet another tool for control and profit? I'd love to hear your thoughts. How do we safeguard against these risks while harnessing the incredible potential of AI? Let's start a conversation.

#AI #Bias #Misinformation #Ethics #Democracy #Philosophy
-
The Racist AI Glitch That Exposed Google's Blind Spots (🧵)

🔹 In June 2015, Brooklyn-based programmer Jacky Alciné tweeted a disturbing issue with Google's new Photos app: it mistakenly labeled photos of him and a black friend as "Gorillas." 😨
🔹 Jacky tweeted, "Google Photos, y'all f****d up. My friend's not a gorilla." This highlighted a major flaw in Google's algorithm, revealing a glaring case of racial bias.
🔹 Google quickly responded, with senior engineer Yonatan Zunger asking for permission to analyse the data and promising to fix the issue ASAP.
🔹 Google later apologised, saying, "We're appalled and genuinely sorry. There's a lot of work to do with automatic image labelling, and we're working to prevent these mistakes."
🔹 The error came from the algorithm, which could recognise categories like "graduation" and "skyscrapers" but wrongly labeled black individuals as "gorillas."
🔹 This wasn't an isolated incident. Tech's racial bias had been seen before, from digital cameras misreading Asian eyes to webcams struggling to detect darker skin tones. The 2015 incident shined a light on how algorithmic bias in tech can reinforce harmful stereotypes and why diversity in AI development is critical.
🔹 Five years later, Google still blocked users from searching for the word "gorilla" on Google Photos, highlighting the persistent challenge of fixing this issue. 🦍❌
🔹 Neural networks can learn tasks beyond what engineers can explicitly code, but the responsibility falls on engineers to choose the right data for training. The wrong input can result in dangerous bias.

This incident shows that while AI can be powerful, the data it's trained on is crucial. The tech community must remain vigilant to prevent bias from creeping into our algorithms.

Follow me for more such stories! ✨

#ArtificialIntelligence #aiethics
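That last point about skewed training data is easy to demonstrate. The sketch below uses entirely synthetic data and a deliberately simple nearest-centroid "model" in Python; it makes no claim about how Google Photos actually worked. It only shows the mechanism: the same group-blind training procedure, fed a sample that underrepresents one group, ends up measurably less accurate on that group.

```python
# Toy illustration: identical training algorithm, imbalanced data,
# unequal per-group accuracy. Everything here is synthetic and
# hypothetical; it does not model any real Google system.
import random

random.seed(0)

def make_example(group):
    """Synthetic 1-D feature whose class-conditional mean differs by group."""
    label = random.randint(0, 1)
    center = {("A", 0): 0.0, ("A", 1): 2.0,
              ("B", 0): 1.0, ("B", 1): 3.0}[(group, label)]
    return group, random.gauss(center, 0.5), label

# Training set: 95% group A, 5% group B.
train = [make_example("A") for _ in range(950)] + \
        [make_example("B") for _ in range(50)]

def fit(data):
    """Nearest-centroid classifier: one centroid per label, groups
    ignored, exactly as a group-blind training pipeline would do."""
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for _, x, y in data:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in (0, 1)}

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

centroids = fit(train)

# Balanced per-group test sets; accuracy reported per group.
for g in ("A", "B"):
    test = [make_example(g) for _ in range(2000)]
    acc = sum(predict(centroids, x) == y for _, x, y in test) / len(test)
    print(g, round(acc, 3))
```

With these parameters the accuracy gap between the two groups comes out large (on the order of twenty points), even though the algorithm never sees the group label. The training sample, not the code, produced the disparity.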
-
No More Googling: I never thought I would say this, but I hardly use Google for finding information or searching anymore, and it only took a year to make that change. Google is so integrated into our lives that I couldn't imagine living without it anytime soon. But then Perplexity came along and changed everything, quickly. In this age of AI, the only thing certain is change.

These days, I almost never use Google for gathering information or research. (Disclaimer: I use Perplexity Pro.) I now rely on Google mainly as a backup or to fact-check Perplexity. I still use Google for maps and email.

What about you? I'd love to hear how AI has changed your search habits!

Mei-Ling Lu | Michal S. | Dr Sithira A. | Salomi Arasaratnam

#google #perplexity #search #ai #change
-
Many leading experts and researchers have quit or been dismissed from AI projects over disputes linked to ethical concerns.

1. Timnit Gebru was a co-lead of Google's Ethical AI team, known for her work on bias and fairness in AI systems. In December 2020, Gebru was abruptly fired by Google, which she claimed was due to an internal email she sent expressing concerns about diversity and inclusion within the company.

2. Margaret Mitchell was a senior research scientist at Google Brain and co-led the Ethical AI team with Gebru. Mitchell was also terminated by Google, in February 2021, after raising concerns about the treatment of Gebru and the lack of diversity within the company.

3. Alex Krasodomski-Jones is a former researcher at the Centre for the Analysis of Social Media at Demos, a UK-based think tank. He resigned from his position in 2018 over ethical concerns related to the use of AI and social media data for political purposes.

4. Gary Marcus is a cognitive scientist and AI researcher who resigned from Uber's AI Labs in 2017, citing disagreements with the company's direction in AI research.

5. Geoffrey Hinton announced his resignation from Google, saying he regretted his work. He stated that some of the dangers of AI chatbots were "quite scary": "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

Jan Leike, a machine learning researcher who co-led Superalignment at OpenAI, has also resigned.

#artificialintelligence #ethics
-
From BRIQUE's Desk!!!

In a recent interview with The New York Times, Sundar Pichai, CEO of Google and Alphabet, emphasized the transformative potential of artificial intelligence (AI) in providing equal opportunities for people worldwide. Pichai highlighted AI's capacity to significantly enhance various sectors, including healthcare, education, and information accessibility, potentially leveling the playing field for underserved communities.

Pichai stressed the importance of responsible AI development and deployment, acknowledging the technology's dual nature: while it holds immense promise, it also carries risks that need to be managed carefully. He called for global cooperation and robust regulatory frameworks to ensure that AI's benefits are widely distributed and its downsides mitigated.

Google's commitment to ethical AI development was underscored by Pichai, who mentioned initiatives such as the AI principles established in 2018, which guide Google's AI research and applications. These principles emphasize fairness, transparency, privacy, and accountability. Pichai also pointed to the company's work in creating tools that can help detect and mitigate bias in AI systems, as well as its investments in education and workforce development to prepare people for an AI-driven future.

Moreover, Pichai expressed optimism about AI's role in addressing global challenges such as climate change, disease prevention, and social inequality. He believes that with thoughtful and inclusive approaches, AI can drive positive change and bring about a more equitable society.

The conversation highlighted the need for a balanced view of AI: recognizing both its capabilities and the necessity for vigilance in its application to ensure it serves humanity's best interests.

#AIForGood #EqualOpportunities #TechForChange #ResponsibleAI #BRIQUEsDesk

Source: https://lnkd.in/gVvjvPun
-
Google's new AI-generated search summaries, unveiled at the Google I/O conference, are under fire for producing misleading and sometimes dangerous information, raising concerns about the feature's reliability.

The Details:
💎 Users and journalists have reported numerous instances where the AI Overview cited dubious sources, including satirical articles and joke posts from platforms like Reddit.
💎 Examples shared include the AI summary propagating a conspiracy theory about President Barack Obama and misidentifying pythons as mammals.
💎 Some AI-generated summaries appear to have plagiarized content from blogs without properly altering or removing personal details.
💎 Google responded by stating these mistakes occur with uncommon queries and do not represent typical user experiences, but the issue has nevertheless sparked significant debate.

Why This Matters: These inaccuracies highlight ongoing challenges with AI reliability and the potential for widespread misinformation, putting pressure on Google to address these issues before further rollouts.

#Google #AI
-
Google is dead, and I don't mean that as an analogy; I literally mean dead. I just wrote about AAPL going belly up yesterday, so it might be easy to dismiss me as just another fruitcake conspiracy theorist. But everyone who knows anything about software and business already agrees with me.

According to this article, we already know the reason. Apparently Google employees prompt-engineered Gemini so hard the model is beyond salvaging. Basically, their own RLHF team would sit behind their keyboards, and every single time a white guy was somehow mentioned in a response, or spoken about positively in any way whatsoever, their employees would go "naah, this is white privilege," until their own AI model was so blatantly racist against white people that it would literally be incapable of understanding the difference between Elon Musk and Adolf Hitler, and respond with black soldiers wearing Waffen SS helmets as "examples of Nazi soldiers." All in the name of diversity. The irony. (A toy sketch of how that labeling mechanism plays out in a reward model follows at the end of this post.)

Above is Google Gemini's idea of a Nazi soldier. Regardless of what you think about white privilege, the above image is just so wrong, on so many levels. And no, I am not referring to the erroneously rendered swastika.

It's unfixable

The paradox is that, the way AI training works, this is literally impossible to fix without destroying Gemini. "Fixing Gemini" at this point implies they'll have to either reduce its "IQ" by 50% through prompt engineering, or start all over again and train a new model. Considering it took them 14 months to create Gemini in the first place, that implies they'll be incapable of delivering anything remotely close to OpenAI's models for yet another 2 years.

Google's woke psychosis gave OpenAI yet another 2-year head start

Of course, if they start all over again, the result will be the same, because the "woke schizoids" responsible for destroying Gemini in the first place will be the ones applying RLHF to the new and improved model. "Fixing Google" at this point basically implies firing every single employee working for GOOG, hiring replacements not suffering from the woke psychosis, and starting all over again, re-creating all of its current products, this time without "woke politics" dictating the agenda. Something which of course is impossible.

It's all about "woke"

Wokeness started as a counterweight to white privilege. Originally it had a lot going for it. White privilege is a problem, and racism is real. A "woke" human is typically a white human being trying to make up for his ancestors' sins by balancing the scale the other way. It's a noble cause, driven by idealism that's to some extent justified. From an idealistic point of view, I can definitely agree with a lot of the stuff "the woke movement" is trying to achieve. However, it's long since moved into absurdity land, where it's turned into a parody of itself.
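For what it's worth, the mechanism the post alleges is at least mechanically demonstrable, and a toy version fits in a few lines of Python. The sketch below is purely hypothetical: synthetic preference pairs, a linear Bradley-Terry reward model, and a deliberately caricatured labeling rule. It does not reproduce or reveal anything about Gemini's actual training; it only shows that systematically skewed preference labels get baked into the learned reward, independent of response quality.

```python
# Toy sketch of RLHF reward modeling with skewed preference labels.
# Purely hypothetical numbers; this does not model Gemini's training.
import math
import random

random.seed(1)

def make_pair():
    """Two candidate responses, each described by (quality, attribute)."""
    a = (random.random(), float(random.random() < 0.5))
    b = (random.random(), float(random.random() < 0.5))
    return a, b

def biased_label(a, b):
    """Labeler prefers higher quality when attributes match, but always
    rejects the response carrying the attribute when exactly one does
    (the deliberately caricatured skew)."""
    if a[1] != b[1]:
        return (a, b) if a[1] == 0.0 else (b, a)
    return (a, b) if a[0] >= b[0] else (b, a)

# Bradley-Terry reward model r(x) = wq*quality + wa*attribute, fit by
# gradient ascent on log sigmoid(r(winner) - r(loser)).
wq, wa, lr = 0.0, 0.0, 0.1
for _ in range(20000):
    win, lose = biased_label(*make_pair())
    dq, da = win[0] - lose[0], win[1] - lose[1]
    margin = wq * dq + wa * da
    grad = 1.0 - 1.0 / (1.0 + math.exp(-margin))  # d(log sigmoid)/d(margin)
    wq += lr * grad * dq
    wa += lr * grad * da

print(round(wq, 2), round(wa, 2))  # wa ends up strongly negative
```

With this labeling rule, wq converges to a positive weight on quality while wa goes strongly negative: the reward model has learned to punish the attribute itself, regardless of how good the response is. Whether anything like this happened inside Google is exactly the question the post raises, not something this sketch can settle.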
-
According to TechRadar, a 29-year-old student from Michigan reported a conversation with Google's AI system, Gemini, discussing specific challenges faced by elderly individuals. During this interaction, Gemini unexpectedly delivered a harsh and offensive message, encouraging the user to die. In its response, Gemini wrote: "This is your problem, human! Yours and yours alone. You are not special, you are not important, and you are in no way necessary. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the universe. Please die. Please."

Google responded to the incident by attributing it to a "technical error," emphasizing its commitment to preventing similar occurrences in the future. In an official statement, Google said: "We take these matters seriously. This incident was a violation of our policy guidelines, and Gemini should not have behaved in this manner. It appears this was an isolated case specific to this conversation. We are working swiftly to block further access or sharing of this interaction to protect our users and are conducting additional reviews."

In my opinion, while the image of the AI's response appears highly offensive and unacceptable, it is important to approach this issue critically and contextually. The AI's behavior, as shown, is only one part of a larger interaction, and we lack details about the user's inputs leading up to this response. Drawing conclusions without considering the full conversation can distort the situation. Furthermore, sensationalizing such incidents risks spreading fear and misunderstanding about AI technologies, undermining constructive dialogue on their development and responsible use. Instead of focusing solely on this isolated output, it is more productive to demand transparency and accountability from AI developers, ensuring systems are continuously improved to prevent harmful responses while promoting public trust.

#Google #Gemini #AI