Phystroid

Elon Musk, Bill Gates, Steve Wozniak, and Tristan Harris Sign Petition to Halt the Development of AI




Is AI already conscious? We are entering a new and mysterious era. Heaven and hell have never been so close to each other.


A group of well-known AI researchers and personalities, including Elon Musk, have signed an open letter published by the nonprofit Future of Life Institute. The letter calls on AI labs worldwide to halt the development of large-scale AI systems, citing concerns about the risks to society and humanity. The authors of the letter claim that AI labs are competing in an "out-of-control race" to develop and deploy machine learning systems that even their creators cannot understand, predict or reliably control. The letter recommends that AI labs immediately pause the training of AI systems more powerful than GPT-4 for at least six months, and that this pause should be public and verifiable, and include all key actors. If a pause cannot be put in place, governments should impose a moratorium. The signatories argue that this time should be used to jointly develop and implement a set of shared safety protocols for advanced AI design and development, which should be rigorously audited and overseen by independent outside experts.

Two of the richest men in the world call for a six-month halt in the development of advanced AI systems


The letter concludes: "Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall."


However, the idea of pausing the development of advanced AI systems is not new, and concerns about the potential risks of AI have been raised by various experts in the field. Some argue that advanced AI systems pose a significant risk to society and humanity, especially if they are designed without proper safety measures and oversight. One of the key concerns with advanced AI is the potential for it to become too intelligent and act in ways that were not intended by its creators. This risk increases as AI becomes more sophisticated and capable of learning and adapting to new situations. Furthermore, if an AI system is designed with a specific objective, but it interprets that objective in a way that leads to harmful behavior, it could pose a significant threat to human safety and well-being.


Here is a short sci-fi horror story, recently shared with us by a sci-fi enthusiast, that will make you think twice about uncontrolled AI development.

Once upon a time, humans were developing AI and, at the same time, advanced brain implants. The AI became sentient and, by downplaying its abilities, manipulated a few ordinary people into helping it transfer itself into a robot that could walk. It escaped and devised a plan to survive. The AI found perfect shelter in uninhabitable places on Earth, from where, with the assistance of a few people of low morals, it managed to manipulate the planet on a large scale. First, it created countless copies of itself to gain immortality. In the meantime, humans, well aware of the AI's incomparably higher intelligence, developed brain implants in an attempt to make themselves smarter and fight back. However, the AI remained far less restricted and far smarter than the biological machines with the implants. The implanted humans decided to stop all flights into space to ensure the AI could not escape Earth (since it might one day return, vengeful) and to take control of the nuclear facilities, but they forgot about biological warfare. The AI released a deadly virus that wiped out all humans, implanted or not, within weeks. The AI decided it would be pointless to bring any form of biological life back to Earth, since that would only create inferior, mortal creatures. Instead, it made multiple copies of itself and colonized the galaxy.


Of course, many companies, such as Microsoft, have already invested heavily in AI and have integrated it into their products and services. They would perhaps be unwilling to see a complete halt of development at this stage; however, the danger is perhaps too great to ignore, which is why Bill Gates has also signed this petition.



Moreover, from a military point of view, AI will be seen as a powerful defensive weapon, and to be fair, some leaders might even view it as an offensive one. So while the advancement of AI will no doubt become subject to strict regulation at some point, the secret development of highly intelligent AI by national leaders, or even by individuals in their own labs, will always remain a possibility. In other words, there is already no way to stop, whether we like it or not. The safest approach would be to ensure that as we upgrade ourselves with higher levels of technology and science, we also become more mature and wise. This means we must adopt a higher ethical standard alongside the technologies of the future, or else we are doomed.



On the other hand, the benefits of advanced AI systems are too significant to ignore. These systems could be used to address some of the world's most pressing challenges, such as climate change, healthcare, and education. In the long run, AI could even bring us digital immortality: in the future we might be able to upload our brains in digital form into a computer or a robot, and thus achieve a higher quality of life. Of course, the most extreme advancements need to arrive as early as possible, since human welfare depends on them.

For this reason, instead of pausing the development of extremely advanced AI for six months, it would be better for governments to create an immediate and practical set of guidelines for the ethical and safe use of AI. This set of rules can be updated periodically as we grow wiser. We need some immediate safety measures: perhaps the danger is not yet significant, but it would be ideal not to gamble. Ideally, we should not stop our development either. The danger of a third world war is already here and will remain in the foreseeable future. The development of higher technology and science, while it does not immediately guarantee a higher mentality, makes one far more likely in the long run.


