The theory of technological singularity predicts a momentous event in which humanity loses control over its own technological creations. It foresees the rise of conscious machines whose intelligence surpasses our own, resulting in a future where humans no longer hold the reins of progress. This stage, known as AI singularity, poses the greatest threat to humanity, and unfortunately, it is already underway.
Artificial intelligence (AI) reaches its full potential not just when machines can replicate human actions, but when they can surpass them without human supervision. Reinforcement learning and supervised learning algorithms have played crucial roles in the development of robotics, digital assistants, and search engines. However, the future of numerous industries and scientific endeavors hinges on the advancement of unsupervised learning algorithms. These algorithms, which leverage unlabeled data to improve outcomes, hold the key to autonomous vehicles, non-invasive medical diagnosis, space construction, autonomous weapons design, facial-biometric recognition, remote industrial production, and stock market prediction.
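To make the distinction concrete: supervised learning needs humans to label every training example, while unsupervised learning finds structure in raw, unlabeled data on its own. A minimal sketch of the unsupervised idea is k-means clustering, shown below; the data and function names are illustrative, not drawn from any particular system mentioned above.

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Group unlabeled points into k clusters -- no human labels required."""
    # Farthest-point initialization: start from the first point, then
    # repeatedly add the point farthest from the centroids chosen so far.
    centroids = [points[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centroids], axis=0)
        centroids.append(points[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # Assign each point to its nearest centroid ...
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # ... then move each centroid to the mean of its assigned points.
        centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# Two well-separated, unlabeled blobs of 2-D points: the algorithm
# recovers the grouping without ever being told which point is which.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.5, (50, 2)),
                  rng.normal(5, 0.5, (50, 2))])
labels, centroids = kmeans(data, k=2)
```

The point of the sketch is the absence of labels: the same loop of "assign, then update" discovers the two groups purely from the geometry of the data, which is the property the essay argues makes unsupervised methods so consequential.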
Despite early warnings about the impending human rights gaps and the social costs of AI, some dismiss its development as just another technological disruption. Nevertheless, recent advances in the optimization of AI algorithms indicate that we have moved beyond the era of simple or narrow AI. As machines approach basic autonomy in the coming years, they will not only correct their own flaws but also accomplish tasks that surpass human capabilities.
Critics who downplay the possibility of singularity often argue that AI has been designed to serve humanity and enhance productivity. However, this proposition suffers from two fundamental flaws. First, singularity should be seen as an ongoing process that has already commenced in many areas. Second, as machines gradually gain independence, humans grow increasingly dependent on them, resulting in more intelligent machines and less intelligent humans.
In our pursuit to provide AI machines with extraordinary attributes foreign to human nature—unlimited memory, lightning-fast processing, and emotionless decision-making—we harbor the hope of controlling our most unpredictable invention. Unfortunately, the concentration of AI architects in a few countries, coupled with intellectual property and national security laws, renders control over AI development illusory.
Machine self-awareness begins with ongoing adaptations in unsupervised learning algorithms, but the integration of quantum technology further solidifies AI singularity by transforming artificial intelligence into an unparalleled form of intellect, thanks to its exponential data processing capabilities. Nonetheless, achieving singularity does not require machines to attain full consciousness or quantum technology integration.
The power of such learning algorithms, exemplified by systems like ChatGPT and Bard, is already evident in various domains, from law school admission exams to medical licensing tests. These algorithms enable machines to perform tasks that are currently the domain of humans. These results, combined with AI's most ambitious development, AI empowered by quantum technology, serve as a final warning to humanity: once the threshold between basic and exponential optimization of these learning algorithms is crossed, AI singularity becomes an irrevocable reality.
The time has come for international political action. AI-producing and non-AI-producing nations must collaborate to establish an international technological oversight body and an artificial intelligence treaty that sets forth fundamental ethical principles.
Above all, the greatest risk is that humans will realize AI singularity has occurred only after machines remove the flaw limiting their intelligence: human input. AI singularity becomes irreversible when machines grasp what humans often forget: to err is human.