Prominent Authorities and Leading AI Experts Emphasise the Need to Highlight Safety and Ethical Issues in AI Technology
Date: 24 October 2023
THE SOIL – Prominent authorities on artificial intelligence have recommended that governments and AI companies allocate at least 33 percent of their research and development budgets to supporting the responsible and ethical implementation of these systems. In a statement released on Tuesday, leading AI experts emphasised the need to highlight safety and ethical issues in AI technology.
A week before the International AI Safety Summit takes place in London, a letter has been sent offering a series of recommendations for companies and governments, intended to reduce the risks that artificial intelligence may pose.
"AI Models are too powerful, and too significant," Says Yoshua Bengio
“Governments should also mandate that companies are legally liable for harms from their frontier AI systems that can be reasonably foreseen and prevented,” reads the letter signed by more than a dozen eminent AI scholars, three Turing Award winners, and a Nobel laureate.
There are currently no comprehensive regulations on artificial intelligence safety, and the first set of rules proposed by the European Union has not yet been implemented because of unresolved disagreements among legislators.
Yoshua Bengio, one of the three researchers often called the godfathers of artificial intelligence, stated that “recent state-of-the-art AI models are too powerful, and too significant, to let them develop without democratic oversight.”
Letter Endorsed by Dawn Song, Yuval Noah Harari, Andrew Yao, Daniel Kahneman, and Geoffrey Hinton
Prominent individuals including Dawn Song, Yuval Noah Harari, Andrew Yao, Daniel Kahneman, and Geoffrey Hinton have endorsed the letter.
Prominent academics and well-known business executives, including Elon Musk, have expressed concerns about the potential risks of artificial intelligence (AI) since the release of OpenAI’s generative AI models, even suggesting that the development of powerful AI systems be paused for six months.
Companies May Claim that Complying with Regulations Presents Serious Difficulties
In response, some businesses have voiced concerns about the significant costs of complying with regulatory standards and the potentially unequal distribution of legal obligations.
“Companies may claim that complying with regulations presents serious difficulties and that ‘regulation stifles creativity.’ Such assertions seem ill-founded,” British computer scientist Stuart Russell said.