In multiple previous posts we have explored the pros and cons of Artificial Intelligence (AI), but we have also taken into consideration one significant concern - AI systems were released without proper regulations or a complete understanding of their potential impact.
This has meant that many companies started using AI as a means to cut jobs rather than to enhance them. Embracing the technology and using AI as a tool should instead support - rather than destroy - jobs and enhance existing operations.
Yesterday we looked, for example, at a fashion collection in which a designer turned to Artificial Intelligence for inspiration, but then, obviously, relied on her team to adapt some of the solutions the AI suggested.
In other posts we examined how AI-driven solutions can help healthcare professionals produce swifter cancer diagnoses or assess an individual's risk of heart disease just by scanning their eyes. Yet this still implies training medical staff to use the systems and software, or pairing them with technical experts who can assist health professionals. In brief, companies should strengthen their workforce through AI, not reduce it in the belief that a machine can do the work of a human.
There is another issue around AI that is also proving challenging for governments all over the world and for the developers of AI systems - establishing the laws and norms that should regulate it.
To this end, US Senate Majority Leader Chuck Schumer (D-NY) organized the AI Insight Forum yesterday. The closed-door meeting was attended by influential figures, including tech leaders such as Microsoft's Bill Gates, Alphabet and Google's Sundar Pichai, Nvidia's Jensen Huang, OpenAI's Sam Altman, Meta's Mark Zuckerberg and Elon Musk, CEO of Tesla and of X, the social network formerly known as Twitter.
The forum discussed the rise of AI, its potential and the need for regulation. Attendees generally supported the idea of regulation, but there was little consensus on what such rules should entail.
Ideas explored during the forum included creating an independent agency to oversee AI development, enhancing transparency within companies, and maintaining the global competitiveness of the US compared to China and other countries.
Among the solutions, the forum discussed "watermarking" AI-created content, but not much else. Issues related to political elections, destabilization and the spread of disinformation were also raised.
The forum was preceded on Tuesday by Senate hearings on the regulation of AI, at which Microsoft President Brad Smith and Nvidia Chief Scientist William Dally testified alongside Woodrow Hartzog, a professor of law at Boston University School of Law.
Microsoft developed its in-house AI tool, Copilot, but also launched a series of collaborations: it invested $10 billion in OpenAI and worked with Meta to launch and support the open-source large language model Llama 2. Nvidia, for its part, is one of the primary beneficiaries of the AI surge, with its chips powering numerous prominent AI applications, including ChatGPT.
Both Microsoft and Nvidia commended Senate efforts to create a legal framework for certifying high-risk AI, but stressed the need for robust enforcement mechanisms alongside ethical considerations. While acknowledging Congress's positive steps, Nvidia's Dally also dismissed fears of AI becoming uncontrollable, emphasizing that humans remain in control of AI models.
Senators in general raised concerns about disinformation, deepfakes, data privacy and child protection: Senator Richard Blumenthal advocated a risk-based approach to AI regulation and introduced a bipartisan framework, co-authored with Josh Hawley, a Missouri Republican, that aims to establish an independent oversight body tasked with licensing high-risk AI technology (the full text is not yet available, but Hawley urged Microsoft to raise the minimum age for using its AI systems above 13).
Yet, despite the calls for regulation, it is evident that neither the government nor the tech companies are too sure about what to do.
Politicians lack a clear understanding of how to approach AI, while tech companies often issue press releases or statements claiming they may have created powerful systems capable of unleashing Armageddon on humanity, but then openly make clear that any regulation should align with their interests.
Sam Altman, CEO of OpenAI, the maker of ChatGPT, has so far displayed an ambivalent approach: a few months ago he was among the signatories of the Center for AI Safety's statement affirming that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war", but he also criticized the European Union's announcements regarding AI legislation as "over-regulating".
The problem with these forums and discussions is that they do not produce any decisions. Besides, the main mistake in the United States is that the government consistently seeks guidance from the same tech companies that are behind these systems - companies that in turn ask for help, but then end up collaborating with the government to write regulations that prioritize their own interests (in 2020, Washington state passed the country's first bill regulating facial recognition; the legislation, however, was authored by a state senator who also happened to be a Microsoft employee).
Wednesday's forum in Washington also included representation from labor and civil liberties groups: Elizabeth Shuler, the president of the labor union AFL-CIO; Maya Wiley, the president and CEO of the Leadership Conference on Civil & Human Rights; Meredith Stiehm, the president of the Writers Guild of America (WGA), on strike since May for reasons also related to the use of AI in the film industry (members of the Screen Actors Guild-American Federation of Television and Radio Artists, SAG-AFTRA, joined them in July); and Randi Weingarten, the president of the American Federation of Teachers.
Yet some senators, along with campaigners and digital rights activists concerned about the impact of AI on vulnerable populations, believe tech executives should testify in public, arguing that more transparency is needed.
What is the point, indeed, of holding a round table or a forum without the people who have actually been negatively impacted by new technologies, while including the founders and CEOs of some of the social media platforms that have already caused major disruptions by spreading disinformation and influencing the outcome of elections and referendums around the world? (Politicians seem to be worried about deepfakes, but have we forgotten the Cambridge Analytica investigation?)
AI systems are already unintentionally contributing to biases in various domains, mainly hurting vulnerable groups and communities: biased algorithms can indeed discriminate against Black and brown individuals, immigrants and people with disabilities in sectors ranging from banking and employment to surveillance and policing. These issues have been highlighted by the European Union, which is currently working on the AI Act.
The European Union has so far adopted a more proactive approach than the United States. In June, Margrethe Vestager, the EU's Competition Commissioner, also emphasized that discrimination represents a more immediate and substantial threat posed by Artificial Intelligence than the potential risk of human extinction.
At the Washington forum, Musk told reporters that companies need a "referee", but the term remains abstract: companies seem to want a regulator that pretends to control things without addressing the inherent biases within AI systems, whereas the debate about AI should be an open dialogue among governments, tech companies, advocacy groups and the public. As AI technology rapidly advances, it is crucial to strike a balance between innovation and ethics, and to make sure that the voices of those directly affected by AI are heard.