ChatGPT CEO shares his biggest fear


Sam Altman, CEO of OpenAI, the company behind ChatGPT, expressed concern that AI “could go quite wrong” at a Senate committee session on Tuesday focused on how to regulate the AI space.


Altman also shared his biggest fear about the artificial intelligence chatbot. Responding to a question, he said that if the technology and industry are not properly regulated, they “could cause significant damage to the world.”

Altman, who testified before the US Senate Judiciary subcommittee, spoke about the problems that may arise if ChatGPT is not limited by regulations. “My biggest fear is that we as a tech industry could do significant damage to the world. I think it can happen in many different ways,” he said.

“As with all technological revolutions, I expect a significant impact on employment, but what exactly that impact looks like is very difficult to predict,” Altman said. He also shared his prediction that ChatGPT technology could “go to very bad places.”

“If this technology goes wrong, it can go pretty wrong. We want to make our voices heard on this issue. We want to work with the government to prevent this from happening. But we have to be open-minded about it,” Altman said.


When Senator Josh Hawley asked about the risk of ChatGPT technology being used to manipulate people such as undecided voters, Altman answered, “I’m worried about this.”

During the congressional session, senators voiced a number of concerns and shared the general opinion that regulation is needed, though there was no clear consensus on what form it should take. Senator Chris Coons said he was concerned that AI models developed in China would promote a pro-Chinese “point of view.”

Senator Hawley listed the potential negative effects as “loss of jobs, loss of privacy, manipulation of personal behavior, manipulation of personal opinions and destabilization of elections in America.”

However, Altman expressed optimism that AI will create more jobs than it destroys, saying that ChatGPT is “good at doing tasks, not chores.”


At the meeting, Altman recommended establishing an independent body to carry out licensing inspections of artificial intelligence technologies, stating that this would allow a set of security standards to be created, including the evaluation of dangerous capabilities. That way, Altman said, we can ensure that the models “can’t self-replicate and move on their own.”

