A voluntary AI safety code – is it enough?
At least 16 companies have signed up to voluntary artificial intelligence safety standards introduced at the Bletchley Park summit.
They include companies from China and the UAE.
The signatories have committed to voluntarily “work toward” information sharing, “invest” in cybersecurity and “prioritise” research into societal risks.
However, the standards have faced criticism for lacking teeth due to their voluntary nature.
Speaking at a follow-up to the Bletchley Park event in Seoul, the UK’s technology secretary, Michelle Donelan, said the Seoul event “really does build on the work that we did at Bletchley”.
While it is good that there are now standards to be worked towards, Fran Bennett, the interim director of the Ada Lovelace Institute, has warned that as long as they remain voluntary there is a risk that many companies will simply ignore them.
“It’s great to be thinking about safety and establishing norms, but now you need some teeth to it: you need regulation, and you need some institutions which are able to draw the line from the perspective of the people affected, not of the companies building the things,” she said.
There is no doubt that the launch of ChatGPT has stimulated what has been called an “arms race” in AI.
“There are just not enough people who understand how to make these systems, how to make them really perform, and how to solve some of the challenges going forward,” says Andrew Rogoyski, director of innovation at the Surrey Institute for People-Centred AI at the University of Surrey.