4 reasons why OpenAI's Project Q* could be a danger to humanity

 

OpenAI's new project, codenamed Q* –– pronounced Q Star –– is reportedly working on a model approaching Artificial General Intelligence (AGI). The model is believed to have human-like reasoning and cognitive skills.



In Short

  • Q* is an AI model nearing AGI, demonstrating advanced reasoning similar to human cognitive functions.
  • Critics are concerned about the real-world applications and underlying risks, including job loss and unchecked power.
  • OpenAI boss Sam Altman has also expressed apprehension about the AGI project.

After last week's fired-then-rehired drama around Sam Altman, OpenAI is once again in the news, this time for something that some researchers are calling a possible threat to humanity. The tech world is abuzz over OpenAI's undisclosed Artificial General Intelligence (AGI) initiative, known as Q* –– pronounced Q Star. Despite being in its early phases, the project is being hailed as a groundbreaking development in the quest for AGI, while some see it as a danger to the human race.

Q* isn't your typical algorithm; it is an AI model said to be nearing AGI. This means that, unlike ChatGPT, Q* reportedly exhibits superior reasoning and cognitive skills. ChatGPT currently answers based on the data it has been trained on, whereas an AGI model would learn to reason, think, and comprehend on its own.

Essentially, Q* is described as a model-free method in reinforcement learning, departing from model-based approaches by not requiring prior knowledge of the environment. Instead, it learns through experience, adapting its actions based on rewards and penalties. Tech experts believe Q* will display impressive capabilities, demonstrating advanced reasoning similar to human cognitive functions.
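OpenAI has not published details of Q*, but the name echoes Q-learning, the classic model-free reinforcement-learning algorithm whose optimal action-value function is conventionally written Q*. As a rough illustration of what "learning from rewards and penalties without a model of the environment" means, here is a minimal tabular Q-learning sketch on a toy corridor environment. The environment, hyperparameters, and episode count are illustrative choices for this sketch, not anything disclosed by OpenAI:

```python
import random

# Toy deterministic corridor: states 0..4, goal at state 4.
# Actions: 0 = left, 1 = right. Reward 1.0 on reaching the goal, else 0.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: one value per (state, action) pair -- no model of the environment,
# only estimates of how good each action is, refined from experience.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):  # training episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward, done = step(state, action)
        # Core model-free update: nudge Q(s, a) toward
        # reward + gamma * max_a' Q(s', a'), using only observed transitions.
        target = reward + gamma * max(Q[next_state])
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

# The learned greedy policy heads right, toward the goal, from every
# non-terminal state.
policy = ["left" if q[0] > q[1] else "right" for q in Q]
print(policy)
```

The point of the sketch is the update rule: the agent never consults a map of the environment, it simply adjusts its value estimates after each observed reward, which is what "model-free" means in this context.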

However, the very function that makes the new AI model so impressive has researchers and critics worried about the tech's real-world applications and underlying risks. So much so that OpenAI boss Sam Altman has also shared concern about the AGI project, and many speculate that Project Q* is the reason Altman was suddenly ousted from the company. These concerns aren't unfounded, and here are four reasons why we should all be wary of such a technology:

Too many unknowns

Altman's provocative comments on AGI as a "median human co-worker" have already fuelled worries about job security and the unbridled expansion of AI influence. The enigmatic algorithm is celebrated as a milestone in AGI development, but that milestone comes at a cost. The level of cognitive skill the new AI model promises brings uncertainty with it. OpenAI's scientists promise human-like thinking and reasoning capabilities with AGI, which means there is a lot we cannot predict or know about the model. And the more that is unknown, the harder it gets to prepare to control or fix it.

Loss of jobs

Rapid disruptions in technology can outpace individual adaptation, leaving one or more generations unable to acquire the skills or knowledge needed to adjust, which in turn means fewer people able to keep their jobs. The answer isn't as simple as just reskilling people: throughout history, some individuals have progressed alongside technology while others were left to navigate the challenges on their own.

The perils of unchecked power

In the hands of those lacking good intentions, an AI as powerful as Q* poses the risk of catastrophic consequences for humanity. Even with benevolent motives, the complex reasoning and decision-making of Q* may yield outcomes that are harmful, emphasising the crucial importance of thoughtfully assessing its applications.

We are scripting man vs machine in real life

It's as if the world never saw Man vs Machine play out on screen. Scientists at OpenAI should go back and rewatch those stories, and while they are at it, watch I, Robot and Her too. We need to take cues and be prepared for what is to come. An AI model capable of thinking and reasoning like a human being could go rogue at any time. Many may argue that scientists will surely know how to keep things under control, but when it comes to machines, you can never rule out the possibility of them trying to take over.


