Anthropic, the lesser-known rival of OpenAI, is reportedly in talks with several tech majors, including Google, to raise $2 billion in new funding. The move comes on the heels of a commitment from Amazon to invest $1.25 billion in the startup. While the numbers are staggering, seen together with Microsoft’s multi-billion dollar investment in OpenAI, they reveal a disturbing trend: the concentration of AI in the hands of Big Tech.
This might sound far-fetched, considering that most AI advancements are led by startups such as Anthropic and OpenAI. But it makes sense when you consider that Big Tech acts as an enabler for these startups, giving them the means to develop the large language models (LLMs) that are at the core of their AI tech.
Wool over our eyes
AI depends on large amounts of data, and on enough computing power to process it, in order to build, train, and tune LLMs. Rather than going the traditional route of cobbling together the required resources on-site, AI startups offload this to the cloud.
For instance, when Amazon pledged to invest in Anthropic, it said that as part of the deal the startup would use Amazon’s custom chips to train its new AI models. In fact, Anthropic has used Amazon’s cloud service since 2021, and also uses Google’s cloud service. In the same vein, much of the billions of dollars OpenAI has raised from Microsoft has been spent on the tech giant’s cloud platform, Azure.
What this means, according to a recent report from the research institute AI Now, is that AI startups should really be seen as barnacles on the hull of Big Tech.
The report claims that new large-scale, general-purpose AI models, such as GPT-3.5 and ChatGPT, are being promoted by the industry as “foundational” and as a major turning point for scientific advancement in the field.
“These narratives distract from what we call the ‘pathologies of scale’ that become more entrenched every day: large-scale AI models are still largely controlled by Big Tech firms because of the enormous computing and data resources they require,” the report asserts.
Instead of falling for these narratives, the report suggests, we should confront the core problems AI presents: the concentration of economic and political power in the hands of the tech industry in general, and Big Tech in particular, and the danger of encoding harmful bias into LLMs.
Rein in the beast
According to AI Now, AI models must be subject to regulatory scrutiny without delay, given how fast they are being rolled out to the public.
Robust AI regulation shouldn’t just scrutinize the input data; it should also be able to dig into design choices made during model development, in order to weed out risks of discrimination and bias.
This is already being debated by the European Union in the run-up to its upcoming AI Act. The bloc is also planning a bill to hold AI developers liable by giving people the right to sue for damages if they are harmed by an AI system.
In the same vein, this is also an opportune time for the major economies of the Middle East, particularly the UAE and Saudi Arabia, to take the lead and ensure that AI deployments are free of bias.
In 2017, the UAE appointed a Minister of State for Artificial Intelligence and rolled out a national AI strategy in its bid to become a world leader in AI by 2031. One of the strategy’s objectives is to ensure strong governance by regulating AI.
Dubai, which earlier this year saw the launch of the Dubai Centre for Artificial Intelligence, seems to be heading in the right direction with its Ethical AI Toolkit, which provides guidance on the ethical development and use of AI.
These are steps in the right direction, but the country should capitalize on them and introduce stronger, binding regulations to ensure that upcoming AI rollouts don’t just boost innovation but are also free of bias and discrimination.